Review

Advancements in Artificial Intelligence for Fetal Neurosonography: A Comprehensive Review

by Jan Weichert 1,2,* and Jann Lennard Scharf 1

1 Division of Prenatal Medicine, Department of Gynecology and Obstetrics, University Hospital of Schleswig-Holstein, Ratzeburger Allee 160, 23538 Luebeck, Germany
2 Elbe Center of Prenatal Medicine and Human Genetics, Willy-Brandt-Str. 1, 20457 Hamburg, Germany
* Author to whom correspondence should be addressed.
J. Clin. Med. 2024, 13(18), 5626; https://doi.org/10.3390/jcm13185626
Submission received: 30 July 2024 / Revised: 4 September 2024 / Accepted: 16 September 2024 / Published: 22 September 2024
(This article belongs to the Special Issue Update on Prenatal Diagnosis and Maternal Fetal Medicine: 2nd Edition)

Abstract

The detailed sonographic assessment of the fetal neuroanatomy plays a crucial role in prenatal diagnosis, providing valuable insights into timely, well-coordinated fetal brain development and detecting even subtle anomalies that may impact neurodevelopmental outcomes. With recent advancements in artificial intelligence (AI) in general and medical imaging in particular, there has been growing interest in leveraging AI techniques to enhance the accuracy, efficiency, and clinical utility of fetal neurosonography. The paramount objective of this focused review is to discuss the latest developments in AI applications in this field, focusing on image analysis, the automation of measurements, prediction models of neurodevelopmental outcomes, visualization techniques, and their integration into clinical routine.

1. Introduction

The assessment of the anatomic integrity of the fetal central nervous system (CNS) is one of the most challenging tasks during a prenatal sonographic work-up, as the brain’s development and maturation constitute complex and well-orchestrated processes occurring at various embryonic and fetal stages. To preclude diagnostic errors, national and international guidelines explicitly draw attention to the fact that the appearance of the brain undergoes profound changes throughout gestation. Although brain anomalies are among the most common fetal malformations, with an estimated prevalence of 9.8–14 per 10,000 live births [1,2], their in utero detection fundamentally requires familiarity with sonographic brain anatomy and artifacts, together with vigilance for when a subsequent targeted multiplanar assessment of the entire fetal CNS (neurosonography) becomes necessary [3,4]. In general, the efficacy of ultrasound (US) screening largely hinges on the operator’s skill in navigating to and reproducing standard imaging planes, and this, in turn, strongly relates to the gestational age (GA) at examination. In this context, it should be noted that, in the recent past, the majority of severe congenital brain anomalies have been readily identified prenatally by applying a systematic, protocol-based US survey [5,6]. Nevertheless, the detection rates of fetal brain lesions in an unselected population remain somewhat unsatisfactory, and more subtle changes might escape an early diagnosis. In part, this might be explained by the fact that, even though advanced technologies such as three-dimensional US (3DUS) undoubtedly have the potential to contribute to an improved detailed CNS evaluation, there is still little consensus on the ideal method for volume acquisition, the settings, and the analysis of the volume, and an overall lack of standardization in volumetric assessments [7]. On the other hand, Di Mascio et al. stated in their systematic review that fetal brain charts suffer substantially from poor methodologies and are at high risk of bias, especially when focusing on relevant neurosonographic issues [8]. In addition, another publication demonstrated that fetal cortical brain development in fetuses conceived by assisted reproductive technology seems to differ from that in fetuses conceived spontaneously, as expressed by a reduced sulcal depth [9]. This underpins the complexity of an all-encompassing, thorough assessment of the fetal brain. Beyond any doubt, prenatal US is capable of providing precise information regarding fetal anatomical integrity and the severity of abnormal conditions derived from high-quality images with increased diagnostic accuracy and reliability. The transabdominal route remains the technique of choice for a comprehensive anatomic evaluation of specific organs like the fetal brain. As a consequence, the currently available source of image data has to deal with a combination of maternal, fetal, technical, environmental, and acoustic factors hampering image clarity and data acquisition and, ultimately, the establishment of precise antenatal diagnoses.
Current research approaches regarding the clinical applicability of artificial intelligence (AI)-assisted methods in fetal neurosonography (beyond the first trimester) are heterogeneous and, with few exceptions, software solutions that are usable in clinical routine remain rare. However, several promising research topics in this field have emerged. These mainly include the optimized (automated) acquisition of standard 2D planes with the correct orientation and localization within a 3DUS volume, a simplified workflow, the automated recognition of crucial CNS and bony structures (as landmarks) and the subsequent detection of anomalies, the evaluation of image quality, and the assessment of GA by evaluating neurodevelopmental maturation [10,11,12,13,14].
This review aims to summarize our current knowledge about the potential diagnostic targets for AI algorithms in the assessment of the fetal brain in a clinical context and to highlight why AI applications are increasingly being integrated into prenatal US examinations and what practical added value they offer.

2. AI in Prenatal Diagnosis

In the very recent past, we have witnessed a tidal wave of artificial intelligence and its computational applications in healthcare in general and in medical image analysis in particular. In 2022, Dhombres et al. published a systematic review of the actual contributions of AI reported in obstetrics and gynecology (OB/GYN) journals. In detail, most articles covered method/algorithm development (53%, 35/66), hypothesis generation (42%, 28/66), or software development (3%, 2/66). In most cases, validation was performed on a single dataset (86%, 57/66), and no external validation was reported [15].
Machine learning (ML) is a powerful set of computational tools that learn from large (structured) datasets and train models on descriptive patterns, subsequently applying the knowledge acquired to solve the same task in new situations. Although ML algorithms are presently being widely deployed in medicine, expanding diagnostic and clinical tools to augment iterative, time-consuming, and resource-intensive processes and streamline workflows, considerable human supervision is needed. AI models that use deep learning architectures (DL; a subdomain of ML), which predominantly leverage large-scale neural networks loosely mimicking biological synapses in silicon, tend to outperform traditional machine learning methods in complex tasks and constitute the most suitable methodology for image analysis. In a detailed scoping review of the most-cited papers using DL in the literature from 2015 to 2021, segmentation, detection, classification, registration, and characterization tasks accounted for 30, 20, 30, 10, and 10 percent of the surveyed publications, respectively [16]. It is of note that the quality of obstetric US screening images is crucial for clinical downstream tasks, including the assessment of fetal growth and development, in utero compromise, the prediction of preterm birth, and the detection of fetal anomalies. It is now widely recognized, by leading US equipment manufacturers and most experts in this field, that there are clear benefits to utilizing AI technologies for US imaging in prenatal diagnostics. A multitude of convolutional neural network (CNN)-based AI applications in US imaging have showcased that AI models can achieve a performance comparable to clinicians in obtaining the appropriate diagnostic image planes, applying appropriate fetal biometric measurements, and accurately assessing abnormal fetal conditions [10,11,12,13,17,18].
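To make the CNN concept above concrete, the following minimal sketch classifies a grayscale US frame into one of several standard planes. It is a generic illustration, not any of the cited models; the plane labels, image size, and architecture are assumptions chosen for brevity.

```python
# Minimal sketch of a CNN standard-plane classifier (illustrative, not a published model).
import torch
import torch.nn as nn

PLANES = ["transthalamic", "transventricular", "transcerebellar", "other"]  # assumed labels

class PlaneClassifier(nn.Module):
    def __init__(self, n_classes=len(PLANES)):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # 1-channel US frame
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(64, n_classes)

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

model = PlaneClassifier()
frames = torch.randn(8, 1, 224, 224)                     # batch of 8 grayscale frames
logits = model(frames)                                   # shape: (8, 4)
loss = nn.CrossEntropyLoss()(logits, torch.randint(0, len(PLANES), (8,)))
loss.backward()                                          # one illustrative training step
```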
As accurate head measurements are of crucial importance in prenatal and obstetrical ultrasound surveillance, a plethora of automatic methods for fetal head analyses have been proposed. Most studies have focused on the size and shape of the bony skull (excluding its internal structures), applying head detection methods such as object (skull) detection using bounding boxes, segmentation methods with or without ellipse fitting, and edge-based and contour-based methods. Torres et al. published an excellent, comprehensive state-of-the-art review tabulating more than 100 published papers on computational methods for fetal head, brain, and standard plane analyses using US images [13]. Moreover, their survey also summarized the image enhancement protocols of US images, including methods that align the fetal head to a coordinate system, compounding approaches, and US and multimodal registration methods. The authors provided an exhaustive analysis of each method based on its clinical application and theoretical approach, and in their concluding remarks they stated that, although a multitude of distinct image processing methods (mainly deep learning approaches) have been developed in the recent past, there is a need for new architectures to boost the performance of these methods. A strong database is seen as an indispensable prerequisite, which reinforces the need for public US benchmarks and for the development of approaches that deal with limited data (e.g., transfer learning approaches). On the other hand, more effort should be made in AI research to develop methods to segment the head in 3D images, as well as methods that can reliably detect abnormalities or (even subtle) lesions within the fetal brain. Accordingly, Ramirez Zegarra and Ghi outlined an ideal AI setting that addresses the urgent need for multitasking DL models trained for the detection of fetal standard planes, the identification of fetal anatomical structures, and the performance of automatic measurements, which in turn would be able to generate alarm messages in the event of malformations [19].
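The classical segmentation-plus-ellipse-fitting route to head measurements mentioned above can be sketched in a few lines. The snippet below assumes a binary skull mask is already available (e.g., from a segmentation network); the file name and pixel spacing are illustrative placeholders.

```python
# Sketch: fit an ellipse to a segmented skull contour and derive the head circumference.
import math
import cv2
import numpy as np

mask = cv2.imread("skull_mask.png", cv2.IMREAD_GRAYSCALE)    # hypothetical input mask
mask = (mask > 127).astype(np.uint8)

contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
skull = max(contours, key=cv2.contourArea)                   # largest contour = skull outline

(cx, cy), (d1, d2), angle = cv2.fitEllipse(skull)            # least-squares ellipse fit
a, b = d1 / 2.0, d2 / 2.0                                    # semi-axes in pixels

# Ramanujan's approximation of the ellipse perimeter ~ head circumference
h = ((a - b) ** 2) / ((a + b) ** 2)
hc_px = math.pi * (a + b) * (1 + 3 * h / (10 + math.sqrt(4 - 3 * h)))

PX_TO_MM = 0.2                                               # assumed pixel spacing (mm/px)
print(f"HC ≈ {hc_px * PX_TO_MM:.1f} mm")
```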
In fact, in the last decade, several AI-related scientific studies have been conducted to improve the quality of prenatal diagnoses by focusing on three major issues: (I) the detection of anomalies, fetal measurements, scanning planes, and the heartbeat; (II) the segmentation of fetal anatomic structures in still US frames and videos; and (III) the classification of standard fetal diagnostic planes, congenital anomalies, biometric measures, and fetal facial expressions [20].
Various researchers have developed algorithms that can reproducibly quantify biometric parameters with high accuracy; some of these will be discussed critically below. On the other hand, quite a number of AI models were trained with inadequate and/or insufficiently labeled samples, which led to overfitting problems and performance degradation [14]. ‘All models are wrong, but some are useful’ is a statistical aphorism coined by George E.P. Box more than 50 years ago, and it describes the general dilemma in the targeted application of computational modeling approaches and AI solutions (and not only in the past) [21].

3. AI in Fetal Neurosonography

By adopting experience from the use of automated techniques in fetal cardiac assessments, further refinements of AI algorithms or the development of anomaly-specific learning algorithms could help achieve more granular detection of unique CNS lesions [22]. This has the potential to risk-stratify certain fetal populations. It has to be acknowledged, however, that algorithms developed for fetal image recognition require a larger database than other AI algorithms, owing to the similar appearance of different US planes [19]. Moreover, it should be noted that the currently available and clinically approved approaches for the extensive processing of three-dimensional datasets of the fetal CNS only sparsely exploit the diagnostic potential of volume US. In fact, it is currently not feasible to perform both simple and more complex tasks simultaneously, such as assessing the total brain volume, rendering the brain surface, and slicing and displaying all diagnostic sectional planes (in accordance with the ISUOG guidelines), in addition to a multiplanar image display using the same 3D volume (in a vendor-independent manner). This applies to both conventional tools and AI frameworks and does not appear to be readily explainable in light of the existing scientific literature, with its generally highly complex AI pipelines. In this regard, developers and engineers, on the one hand, and clinicians, on the other, should work together more intensively to find relevant integrative volume-based solutions for clinical routine as quickly as possible.
In 2023, an international research group developed a normative digital atlas of fetal brain maturation based on a prospective international cohort (the INTERGROWTH-21st Project), using more than 2500 serially acquired 3D fetal brain volumes [23]. In preparing this fully functional digital brain atlas, the authors proposed an end-to-end, multi-task CNN that both extracts and aligns the fetal brain from original 3D US scans with a high degree of accuracy and reliability (Brain Extraction and Alignment Network; BEAN) [24,25,26]. These steps were necessary (as in most neuroimage analysis pipelines) to enhance the visibility of the brain structures within the 3D US templates, to significantly reduce the amount of extra-cranial volume information processed and, lastly, to overcome the positional variation of the brain inside the scan volume. From the authors’ perspective, there is no doubt that the introduction of computerized human body atlases, based either on US or on MRI image data (as published earlier by Gholipour et al. [27]), will contribute to our understanding of fetal developmental processes in general and brain maturation in particular by providing rich contextual information on our inherently 3D (CNS) anatomy.
The clinical applicability of semiautomatic volumetric approaches, in terms of a detailed reconstruction of the diagnostic planes of the fetal brain, has been validated in previous studies [28,29,30]. Very recently, a 3D UNet-based network for the 3D segmentation of the entire CNS, using intelligent navigation to locate CNS planes within the 3D volume, was introduced (fully automated 5DCNS+™). Applying this tool, our group was able to show that CNS volume datasets (acquired from an axial transthalamic plane) could readily be reconstructed into a nine-view template in less than 12 s on average, facilitating the generation of a complete neurosonogram with high accuracy, efficiency, and reduced operator-dependency, confirming previous findings.
Lu et al. reported on automated software (Smart ICV™) capable of calculating the entire fetal brain volume from 3DUS volume data. This novel technique showed high intra- and inter-observer intraclass correlation coefficients (0.996 and 0.995, respectively) and a high degree of reliability compared to a manual approach using Virtual Organ Computer-aided AnaLysis (VOCAL™) [31]. An overview of the current AI-driven algorithms with either a clinical or pre-clinical context is given in Table 1.
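The agreement figures quoted here are intraclass correlation coefficients. As a worked illustration, the sketch below computes a two-way random-effects ICC(2,1) on a fabricated set of paired brain-volume measurements; all numbers are invented for the example.

```python
# Sketch: ICC(2,1) (two-way random effects, absolute agreement, single measurement).
import numpy as np

# rows = fetuses, columns = the two methods being compared (volumes in cm^3, fabricated)
x = np.array([[231.4, 230.9],
              [198.7, 199.5],
              [305.2, 304.8],
              [254.1, 255.0]])
n, k = x.shape

grand = x.mean()
ms_rows = k * ((x.mean(axis=1) - grand) ** 2).sum() / (n - 1)    # between-subject MS
ms_cols = n * ((x.mean(axis=0) - grand) ** 2).sum() / (k - 1)    # between-method MS
resid = x - x.mean(axis=1, keepdims=True) - x.mean(axis=0, keepdims=True) + grand
ms_err = (resid ** 2).sum() / ((n - 1) * (k - 1))                # residual MS

icc = (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n)
print(f"ICC(2,1) = {icc:.3f}")
```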
DL algorithms have become the methodology of choice for imaging analyses [16,18,32,33]. DL models are capable of overcoming US-image-related challenges, including inhomogeneities, (shadowing) artifacts, poor contrast, and intra- and inter-clinician variability in data acquisition and measurement. Fiorentino and co-workers categorized published work in the field of fetal US image analysis that used a plethora of different DL algorithms. Their review surveyed more than 150 research papers to elaborate the most investigated tasks addressed using DL in this field [18]. The authors demonstrated that fetal standard plane (SP) detection (19.6%) and fetal biometry estimation (20.9%) were among the most prevalent tasks. The fetal CNS and heart were the most explored structures in standard plane detection, while the fetal head circumference was the most frequently investigated measurement in biometry estimation. In 49% of papers, researchers trained DL pipelines for anatomical structure analyses; the most studied anatomical structures were the heart and brain, contributing to 26.7% and 20.0% of the surveyed papers, respectively. In the case of the latter, the analysis is performed on both 2D and, more recently, 3D images to assess the brain’s development and localization, structure segmentation, and GA estimation. The challenges to be addressed regarding AI in image analyses in general comprise the limited availability of (multi-expert) image annotation; the limited robustness of DL algorithms due to the lack of large training datasets (interestingly, only a minority of DL studies use data from routine clinical care); the inconsistent use of both performance metrics and testing datasets, hampering a fair comparison between different algorithms; and the scarcity of research proposing semi-, weakly, or self-supervised approaches [10,18,32].
Table 1. Neurosonographic studies related to artificial intelligence.

| Reference, Year | Country | GA (wks) | Study Size (n) * | Data Source | Type of Method | Purpose/Target | Task | Description of AI | Clinical Value *** |
|---|---|---|---|---|---|---|---|---|---|
| Rizzo et al., 2016 [34] | I | 21 (mean) | 120 | 3D | n.s. | SFHP (axial), biometry | automated recognition of axial planes from 3D volumes | 5D CNS software | ++ |
| Rizzo et al., 2016 [35] ** | I | 18–24 | 183 | 3D | n.s. | SFHP (axial/sagittal/coronal), biometry | evaluation of efficacy in reconstructing CNS planes in healthy and abnormal fetuses | 5D CNS+ software | +++ |
| Ambroise-Grandjean et al., 2018 [36] | F | 17–30 | 30 | 3D | n.s. | SFHP (axial), biometry (TT, TC) | automated identification of axial planes from 3DUS and measurement of BPD and HC | SmartPlanes CNS | ++ |
| Welp et al., 2020 [30] ** | D | 15–36 | 1110 | 3D | n.s. | SFHP (axial/sagittal/coronal), biometry | validation of a volumetric approach for the detailed assessment of the fetal brain | 5D CNS+ software | +++ |
| Pluym et al., 2021 [37] | USA | 18–22 | 143 | 3D | n.s. | SFHP (axial), biometry | evaluation of accuracy of automated 3DUS for fetal intracranial measurements | SonoCNS software | ++ |
| Welp et al., 2022 [29] ** | D | 16–35 | 91 | 3D | n.s. | SFHP/anomalies, biometry | evaluation of accuracy and reliability of a volumetric approach in abnormal CNSs | 5D CNS+ software | +++ |
| Gembicki et al., 2023 [28] ** | D | 18–36 | 129 | 3D | n.s. | SFHP (axial/sagittal/coronal), biometry | evaluation of accuracy and efficacy of AI-assisted biometric measurements of the fetal CNS | 5D CNS+ software, SonoCNS software | ++ |
| Han et al., 2024 [38] | CHN | 18–42 | 642 | 2D | DL | biometry (incl. HC, BPD, FOD, CER, CM, Vp) | automated measurement and quality assessment of nine biometric parameters | CUPID software | ++ |
| Yaqub et al., 2012 [39] | UK | 19–24 | 30 | 3D | ML | multi-structure detection | localization of four local brain structures in 3D US images | Random Forest Classifier | ++ |
| Cuingnet et al., 2013 [40] | UK | 19–24 | 78 volumes | 3D | ML | SFHP | fully automatic method to detect and align fetal heads in 3DUS | Random Forest Classifier, Template deformation | ++ |
| Sofka et al., 2014 [41] | CZ | 16–35 | 2089 volumes | 3D | ML | SFHP | automatic detection and measurement of structures in CNS volumes | Integrated Detection Network (IDN)/FNN | + |
| Namburete et al., 2015 [42] | UK | 18–34 | 187 | 3D | ML | sulcation/gyration | GA prediction | Regression Forest Classifier | ++ |
| Yaqub et al., 2015 [43] | UK | 19–24 | 40 | 3D | ML | SFHP | extraction and categorization of unlabeled fetal US images | Random Forest Classifier | + |
| Baumgartner et al., 2016 [44] | UK | 18–22 | 201 | 2D | DL | SFHP (TT, TC) | retrieval of standard planes, creation of saliency maps to extract bounding boxes of CNS anatomy | CNN | +++ |
| Sridar et al., 2016 [45] | IND | 18–20 | 85 | 2D | DL | structure detection | image classification and structure localization in US images | CNN | + |
| Yaqub et al., 2017 [46] | UK | 19–24 | 40 | 3D | DL | SFHP, CNS anomalies | localization of CNS, structure detection, pattern learning | Random Forest Classifier | + |
| Qu et al., 2017 [47] | CHN | 16–34 | 155 | 2D | DL | SFHP | automated recognition of six standard CNS planes | CNN, Domain Transfer Learning | ++ |
| Namburete et al., 2018 [25] | UK | 18–34 | 739 images | 2D/3D | DL | structure detection | 3D brain localization, structural segmentation and alignment | multi-task CNN | ++ |
| Huang et al., 2018 [48] | CHN | 20–29 | 285 | 3D | DL | multi-structure detection | detection of CNS structures in 3DUS and measurements of CER/CM | VP-Net | ++ |
| Huang et al., 2018 [49] | UK | 20–30 | 339 images | 2D | DL | structure detection (CC/CP) | standardize intracranial anatomy and measurements | Region descriptor, Boosting classifier | ++ |
| van den Heuvel et al., 2018 [50] | NL | 10–40 | 1334 images | 2D | ML | biometry (HC) | automated measurement of fetal head circumference | Random Forest Classifier, Hough transform | + |
| Dou et al., 2019 [51] | CHN | 19–31 | 430 volumes | 3D | ML | SFHP/structure detection | automated localization of fetal brain standard planes in 3DUS | Reinforcement learning | ++ |
| Sahli et al., 2019 [52] | TUN | n/a | 86 | 2D | ML | SFHP | automated extraction of biometric measurements and classification of normal/abnormal | SVM Classifier | ++ |
| Alansary et al., 2019 [53] | UK | n/a | 72 | 3D | ML/DL | SFHP/structure detection | localization of target landmarks in medical scans | Reinforcement learning, deep Q-Net | + |
| Lin et al., 2019 [54] | CHN | 14–28 | 1771 images | 2D | DL | SFHP/structure detection | automated localization of six landmarks and quality assessments | MF R-CNN | + |
| Bastiaansen et al., 2020 [55] | NL | 1st trimester | 30 | 2D/3D | DL | SFHP (TT) | fully automated spatial alignment and segmentation of embryonic brains in 3D US | CNN | + |
| Xu et al., 2020 [56] | CHN | 2nd/3rd trimester | 3000 images | 2D | DL | SFHP | simulation of realistic 3rd- from 2nd-trimester images | Cycle-GAN | ++ |
| Ramos et al., 2020 [57] | MEX | n/a | 78 images | 2D | DL | SFHP, biometry (TC), GA prediction | detection and localization of the cerebellum in US images, biometry for GA prediction | YOLO | + |
| Maraci et al., 2020 [58] | UK | 2nd trimester | 8736 images | 2D | DL | biometry (TC), GA prediction | estimation of GA through automatic detection and measurement of the TCD | CNN | + |
| Chen et al., 2020 [59] | CHN | n/a | 2900 images | 2D | DL | SFHP, biometry (TV) | demonstration of the superior performance of a DL pipeline over manual measurements | Mask R-CNN, ResNet50 | + |
| Xie et al., 2020 [60] | CHN | 18–32 | 92,748 | 2D | DL | SFHP (TV, TC), CNS anomalies | image classification as normal or abnormal, segmentation of craniocerebral regions | U-Net, VGG-Net | ++ |
| Xie et al., 2020 [61] | CHN | 22–26 | 12,780 | 2D | DL | SFHP, CNS anomalies | binary image classification as normal or abnormal in standard axial planes | CNN | ++ |
| Zeng et al., 2021 [62] | CHN | n/a | 1354 images | 2D | DL | biometry | image segmentation for automatic HC biometry | DAG V-Net | + |
| Burgos-Artizzu et al., 2021 [63] | ESP | 16–42 | 12,400 images (6041 CNS) | 2D | DL/ML | SFHP | evaluation of the maturity of current DL classifications tested in a real clinical environment | 19 different CNNs, MC Boosting algorithm, HOG classifier | ++ |
| Gofer et al., 2021 [64] | IL | 12–14 | 80 images | 2D | ML | SFHP/structure detection (CP) | classification of 1st-trimester CNS US images and earlier diagnosis of fetal brain abnormalities | Statistical Region Merging, Trainable Weka Segmentation | + |
| Skelton et al., 2021 [65] | UK | 20–32 | 48 | 2D/3D | DL | SFHP | assessment of image quality of CNS planes automatically extracted from 3D volumes | Iterative Transformation Network (ITN) | ++ |
| Fiorentino et al., 2021 [66] | I | 10–40 | 1334 images | 2D | DL | biometry (HC) | head localization and centering | multi-task CNN | ++ |
| Yeung et al., 2021 [67] | UK | 18–22 | 65 volumes | 2D/3D | DL | SFHP/structure detection | mapping 2D US images into 3D space with minimal annotation | CNN | |
| Montero et al., 2021 [68] | ESP | 18–40 | 8747 images | 2D | DL | SFHP | generation of synthetic US images via GANs and improvement of SFHP classification | Style-GAN | ++ |
| Moccia et al., 2021 [69] | I | 10–40 | 1334 images | 2D | DL | biometry (HC) | fully automated method for HC delineation | Mask-R2CNN | + |
| Wyburd et al., 2021 [70] | UK | 19–30 | 811 images | 3D | DL | structure detection/GA prediction | automated method to predict GA by cortical development | VGG-Net, ResNet-18, ResNet-10 | ++ |
| Shu et al., 2022 [71] | CHN | 18–26 | 959 images | 2D | DL | SFHP (TC) | automated segmentation of the cerebellum, comparison with other algorithms | ECAU-Net | + |
| Hesse et al., 2022 [72] | UK | 18–26 | 278 images | 3D | DL | structure detection | automated segmentation of four CNS landmarks | CNN | +++ |
| Di Vece et al., 2022 [73] | UK | 20–25 | 6 volumes | 2D | DL | SFHP/structure detection | estimation of the 6D pose of arbitrarily oriented US planes | ResNet-18 | ++ |
| Lin et al., 2022 [74] | CHN | 18–40 | 16,297/166 | 2D | DL | structure detection | detection of different patterns of CNS anomalies in standard planes | PAICS, YOLOv3 | +++ |
| Sreelakshmy et al., 2022 [75] ‡ | IND | 18–20 | 740 images | 2D | DL | biometry (TC) | cerebellum segmentation from fetal brain images | ResU-Net | - |
| Yu et al., 2022 [56] | CHN | n/a | 3200 images | 2D/3D | DL | SFHP | automated generation of coronal and sagittal SPs from axial planes derived from 3D volumes | VolRL-Net | ++ |
| Alzubaidi et al., 2022 [76] | QTAR | 18–40 | 551 | 2D | DL | biometry (HC) | GA and EFW prediction based on fetal head images | CNN, Ensemble Transfer Learning | ++ |
| Coronado-Gutiérrez et al., 2023 [77] | ESP | 18–24 | 12,400 images | 2D | DL | SFHP, multi-structure delineation | automated measurement of brain structures | DeepLab CNNs | ++ |
| Ghabri et al., 2023 [20] | TN | n/a | 896 | 2D | DL | SFHP | classification of fetal planes/accurate fetal organ classification | CNN: DenseNet169 | ++ |
| Lin et al., 2023 [78] | CHN | n/a | 558 (709 images/videos) | 2D | DL | SFHP | improved detection efficacy of fetal intracranial malformations | PAICS, YOLO | +++ |
| Rauf et al., 2023 [79] | PK | n.s. | n.s. | 2D | DL | SFHP | Bayesian optimization for the classification of brain and common maternal fetal ultrasound planes | Bottleneck residual CNN | + |
| Alzubaidi et al., 2023 [80] | QTAR | 18–40 | 3832 images | 2D | DL | SFHP | evaluation of a large-scale annotation dataset for head biometry in US images | multi-task CNN | + |
| Alzubaidi et al., 2024 [81] | QTAR | 18–40 | 3832 images (20,692 images) | 2D | DL | biometry | advanced segmentation techniques for head biometrics in US imagery | FetSAM, Prompt-based Learning | + |
| Di Vece et al., 2024 [82] | UK | 20–25 | 6 volumes | 2D/3D | DL | SFHP (TV) | detection and segmentation of the brain; plane pose regression; measurement of proximity to target SP | ResNet-18 | ++ |
| Yeung et al., 2024 [83] | UK | 19–21 | 128,256 images | 2D | DL | SFHP | reconstruction of brain volumes from freehand 2D US sequences | PlaneInVol, ImplicitVol | ++ |
| Dubey et al., 2024 [84] | IND | 10–40 | 1334 images | 2D | DL | biometry (HC) | automated head segmentation and HC measurement | DR-ASPnet, Robust Ellipse Fitting | ++ |
Gray shading in the original table marks clinically validated (and commercially available) software. Abbreviations: 2D, two-dimensional; 3D, three-dimensional; BPD, biparietal diameter; CER, cerebellum; CNN, convolutional neural network; CNS, central nervous system; CP, choroid plexus; CSP, cavum septum pellucidum; DL, deep learning; FOD, fronto-occipital diameter; GA, gestational age; GAN, generative adversarial network; HC, head circumference; LV, lateral ventricles; n/a, not applicable; n.s., not specified; PAICS, prenatal ultrasound diagnosis artificial intelligence conduct system; ResNet, residual neural network; SFHP, standard fetal head plane; SVM, support vector machine; TC, transcerebellar plane; TT, transthalamic plane; TV, transventricular plane; US, ultrasound; Vp, width of the posterior horn of the lateral ventricle; YOLO, You Only Look Once algorithm. * If not otherwise specified: number of patients; ** a fully automated AI-driven software update has been released; *** potential clinical impact; ‡ withdrawn article.

3.1. AI in GA Prediction

Reliable methods for accurate GA estimation in the second and third trimesters of pregnancy remain an unsolved challenge in obstetrics. This might be due to late booking, infrequent access to prenatal care, the unavailability of early US examinations, and other reasons [63]. Namburete et al. introduced a model that was able to characterize neuroanatomical appearance, both spatially and temporally, while identifying relevant brain regions, such as the Sylvian fissure and the cingulate and callosal sulci, as important image regions in the GA discrimination task [42]. The authors additionally extended the canonical feature set (e.g., Haar-like features) with clinically relevant metadata such as the head circumference to capture structural changes within the fetal brain. The algorithm improved the confidence of age predictions provided by the clinical HC method by ±0.64 days and ±4.57 days in the second and third trimesters, respectively. A similar approach estimates GA from standard transthalamic axial plane images using a supervised DL model (quantusGA) that automatically detects the position and orientation of the fetal brain by detecting the skull and five internal key points (it is necessary to crop and rotate the brain, resulting in a horizontally aligned brain image). The model then extracts textural and size information from the brain pixels and uses this information to generate an estimate of the respective GA, with a similar or even lower error compared to fetal biometric parameters, especially in the third trimester [63]. As the results of a recent study suggest, AI models are capable of estimating GA with an accuracy comparable to that of trained sonographers conducting standard fetal biometry (e.g., of the fetal head). The authors trained a DL algorithm to estimate GA from blind US sweeps and showed that the model’s performance appears to extend to blind sweeps collected by untrained providers in low-resource settings [84]. Similar results were demonstrated by two groups, whose ML-based algorithms outperformed current ultrasound-based clinical biometry in GA prediction, with mean absolute errors of 3.0 and 4.3 days [85], or 1.51 days (using an ensemble model of both unlabeled images and video data) [86], in second- and third-trimester fetuses.
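As a hedged illustration of GA estimation framed as image regression, the sketch below attaches a single-output head to a ResNet-18 backbone and trains it against GA in days with an L1 loss, which directly optimizes the mean absolute error reported in these studies. It is a generic stand-in, not the published quantusGA or blind-sweep pipelines, and all data are placeholders.

```python
# Sketch: GA regression from a single US image (generic, illustrative setup).
import torch
import torch.nn as nn
from torchvision import models

model = models.resnet18(weights=None)           # load pre-trained weights in practice
model.conv1 = nn.Conv2d(1, 64, 7, stride=2, padding=3, bias=False)  # grayscale input
model.fc = nn.Linear(model.fc.in_features, 1)   # regression head: GA in days

images = torch.randn(4, 1, 224, 224)            # placeholder transthalamic crops
ga_days = torch.tensor([[154.0], [175.0], [203.0], [231.0]])  # fabricated labels

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
pred = model(images)
loss = nn.L1Loss()(pred, ga_days)               # L1 loss = MAE in days
loss.backward()
optimizer.step()
print(f"batch MAE: {loss.item():.1f} days")
```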

3.2. AI Used for Augmenting Fetal Pose Estimations and CNS Anomaly Assessments

In the recent past, CNNs and other deep learning architectures have been trained to recognize and predict fetal poses from imaging data [25,66,87,88]. Already established methods were mainly designed for standard plane identification; they assume that good US image quality has already been achieved with the fetus in a proper position and are therefore only used to assist in prenatal image analyses. In contrast, a study group from the UK recently emphasized the utility of recognizing the probe’s proximity to diagnostic CNS planes, facilitating earlier and more precise adjustments during 2D US scanning. This semi-supervised segmentation and classification model used an 18-layer residual CNN (ResNet-18) that was trained on both labeled standard planes and unlabeled 3D US volume slices to filter out frames lacking the brain and to generate masks for those containing it, enhancing the relevance of plane pose regression in a clinical setting [81]. In a previous study, the authors applied a similar 18-layer residual CNN as the backbone for feature extraction (with pre-trained ImageNet weights) and 6D pose prediction (the task of determining the six-degree-of-freedom pose of an object in 3D space) of arbitrarily oriented planes slicing the fetal brain’s US volume, without the need for ground-truth data in real time or prior 3D volume scans of the fetus [72].
Yeung and colleagues proposed an algorithm for the more general task of predicting the location of any arbitrary 2D US plane of the fetal brain in a pre-defined 3D space [66]. In their work, they demonstrated that, based on extensive data augmentation and complementary information from training volumes acquired at different orientations, the prediction made by a novel CNN model was generalizable to real 2D US acquisitions and videos, despite the model having only been trained with artificially sampled 2D slices. Considering that 3D volumes provide more effective spatial information and exhibit higher degrees of freedom (DoF), the increased variation in fetal poses makes the proper training of these algorithms challenging. In fact, 6D fetal pose estimation refers to the process of determining the six-degrees-of-freedom (6DoF) pose, comprising three translational (position) and three rotational (orientation) parameters, allowing for a comprehensive understanding of the fetus’s spatial position in a coordinate system and its movement in utero. The study conducted by Chen and co-workers dealt with a similar topic: fetal pose estimation in 3D US. The authors introduced a novel 3D fetal pose estimation framework (FetusMapV2) that was able to identify a set of 22 anatomical landmarks for first- and early second-trimester fetuses, and their specific connections, to provide a comprehensive and systematic representation of the fetal pose in 3D space and to overcome challenging issues such as poor image quality, limited GPU memory for tackling high-dimensional data, symmetrical or ambiguous anatomical structures, and considerable variations in fetal poses [87].
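To make the notion of a 6DoF pose concrete, the sketch below assembles three predicted translations and three predicted rotations into a homogeneous rigid transform that places a canonical 2D plane inside a 3D brain volume. The numeric values are placeholders standing in for network outputs, and the Euler-angle convention is an arbitrary choice for the example.

```python
# Sketch: turning a 6DoF prediction (3 translations + 3 rotations) into a rigid transform.
import numpy as np
from scipy.spatial.transform import Rotation

pred = {"tx": 12.0, "ty": -3.5, "tz": 40.2,      # mm, relative to the brain center
        "rx": 5.0, "ry": -12.0, "rz": 90.0}      # Euler angles in degrees

R = Rotation.from_euler("xyz", [pred["rx"], pred["ry"], pred["rz"]],
                        degrees=True).as_matrix()

T = np.eye(4)                                    # 4x4 homogeneous rigid transform
T[:3, :3] = R
T[:3, 3] = [pred["tx"], pred["ty"], pred["tz"]]

plane_pt = np.array([1.0, 2.0, 0.0, 1.0])        # a point on the canonical 2D plane
print(T @ plane_pt)                              # its location in volume coordinates
```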
Xu and co-workers trained a DL algorithm (a cycle-consistent adversarial network (Cycle-GAN)) to simulate realistic fetal neurosonography images and, specifically, to generate third-trimester US images from second-trimester images, which were qualitatively evaluated by experienced sonographers [56]. The vast majority (84.2%) of the simulated third-trimester images could not be distinguished from real third-trimester images in this study. These generative adversarial networks (GANs), first introduced by Goodfellow et al. in 2014, are algorithmic architectures that use two neural networks competing against each other: they learn, via a probabilistic model, to generate new synthetic instances of data that can pass for real data/images, thereby augmenting existing datasets for training DL models [89,90]. Generative approaches can better handle missing data in multi-modal datasets by generating the missing image information and preserving the sample size, thereby boosting downstream classification performance [91,92]. GANs might also assist in analyzing abnormal fetal anatomical structures (e.g., CNS anomalies) while also considering the corresponding GA information (there is a wide range of physiological changes among trimesters, leading to marked inter- and intra-organ variability) [18]. A recent research paper introduced a state-of-the-art framework (FetalBrainAwareNet) that leverages an image-to-image translation algorithm and utilizes class activation maps (CAMs) as prior information in its conditional adversarial training process, making it capable of producing more realistic synthetic images and resulting, according to the authors, in greater clinical relevance than similar experimental approaches [56,67,93,94]. The uniqueness of this approach was the incorporation of anatomy-aware regularization terms (one ensuring the generation of elliptical fetal skulls, another refining and distinctly differentiating key anatomical landmarks such as the cerebellum, thalami, cavum septi pellucidi, and lateral ventricles) in each particular fetal head standard plane (FHSP).
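The adversarial principle described above can be stated compactly in code: a generator and a discriminator are optimized against each other. The sketch below is a deliberately toy, fully connected example of a single training step, not Cycle-GAN or FetalBrainAwareNet.

```python
# Sketch: one adversarial training step of a toy GAN.
import torch
import torch.nn as nn

latent_dim = 64
G = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(),
                  nn.Linear(256, 28 * 28), nn.Tanh())      # generator: noise -> fake "image"
D = nn.Sequential(nn.Linear(28 * 28, 256), nn.LeakyReLU(0.2),
                  nn.Linear(256, 1))                       # discriminator: real/fake logit

bce = nn.BCEWithLogitsLoss()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)

real = torch.randn(16, 28 * 28)                            # placeholder "real" data
z = torch.randn(16, latent_dim)

# Discriminator step: push real toward 1, generated samples toward 0
fake = G(z).detach()
loss_d = bce(D(real), torch.ones(16, 1)) + bce(D(fake), torch.zeros(16, 1))
opt_d.zero_grad()
loss_d.backward()
opt_d.step()

# Generator step: try to fool the discriminator into predicting 1 on fakes
loss_g = bce(D(G(z)), torch.ones(16, 1))
opt_g.zero_grad()
loss_g.backward()
opt_g.step()
```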
In US imaging, the presence of speckle noise degrades the signal-to-noise ratio of US images; traditional image denoising algorithms often fail to fully reduce speckle noise while retaining the image’s features. A recently proposed GAN based on U-Net with residual dense connectivity (GAN-RW) achieved the most advanced despeckling performance on US images (e.g., of the fetal head) in terms of its peak signal-to-noise ratio (PSNR), structural similarity (SSIM), and subjective visual effect [95]. Yeung et al. proposed a novel framework (ImplicitVol), a sensor-free approach to reconstructing 3D US volumes from a sparse set of 2D images with deep implicit representation. The authors stated that their algorithm outperformed conventional approaches in terms of the image quality of the reconstructed template, as well as the refinement of its spatial 3D localization, which underscores its additional potential in slice-to-volume registration [82]. The latter refers to a vital technique in medical imaging that transforms 2D slices into a cohesive 3D volume, thereby enhancing our ability to visualize and analyze complex anatomical structures, leading to optimized diagnostic (and therapeutic) outcomes. The same group introduced a multilayer perceptron network (RapidVol) to speed up slice-to-volume ultrasound reconstruction following a tri-planar decomposition of original 3D brain volumes and demonstrated a threefold faster and 46% more accurate complete 3D reconstruction of the fetal brain (collected as part of the INTERGROWTH-21st study) compared to the aforementioned implicit approach [96].
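Both quality metrics mentioned here are straightforward to reproduce. The sketch below computes PSNR and SSIM with scikit-image on synthetic arrays standing in for a reference and a denoised US frame.

```python
# Sketch: PSNR and SSIM between a reference image and a (noisy) reconstruction.
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

rng = np.random.default_rng(0)
reference = rng.random((256, 256))                         # stand-in for a clean frame
denoised = np.clip(reference + 0.05 * rng.standard_normal(reference.shape), 0, 1)

psnr = peak_signal_noise_ratio(reference, denoised, data_range=1.0)
ssim = structural_similarity(reference, denoised, data_range=1.0)
print(f"PSNR = {psnr:.1f} dB, SSIM = {ssim:.3f}")
```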
Lin et al. developed a real-time artificial intelligence-aided image recognition system based on the YOLO (You Only Look Once) algorithm (prenatal ultrasound diagnosis artificial intelligence conduct system; PAICS) that was capable of detecting a set of fetal intracranial malformations. The algorithm was trained on 44,000 images and 169 videos and achieved an excellent performance upon both internal and external validation, with an accuracy comparable to that of expert sonologists [73]. The same group conducted a randomized controlled trial that assessed the efficacy of this deep learning system (PAICS) in assisting in the detection of fetal intracranial malformations. More than 700 images/videos were interactively assessed by 36 operators with different levels of expertise. With the use of PAICS (before or after individual interpretation), an increase in the detection rates of fetal intracranial malformations from neurosonographic data was observed [77].
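PAICS itself is not publicly available. Purely to illustrate the YOLO-style detection workflow it builds on, the sketch below runs a generic open-source detector on a hypothetical US frame; the ultralytics package and the checkpoint are assumptions, not the authors’ model.

```python
# Sketch: generic YOLO inference loop (stand-in for a PAICS-like detector).
from ultralytics import YOLO

model = YOLO("yolov8n.pt")                       # placeholder generic weights, not PAICS
results = model("axial_cns_frame.png")           # hypothetical US frame

for box in results[0].boxes:                     # one entry per detected finding
    label = results[0].names[int(box.cls)]       # predicted class name
    print(label, float(box.conf), box.xyxy.tolist())  # confidence and bounding box
```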

3.3. Other Current AI Applications Related to Fetal Neurosonography

The recent and rapidly emerging subfield of AI that concerns the interaction between computers and human language is known as natural language processing (NLP). The launch in 2022 of the chatbot ChatGPT, a large language model (LLM) based on an NLP architecture known as the Generative Pre-trained Transformer (GPT), has generated a wide range of possible applications for AI in healthcare [97,98,99]. Therefore, beyond the (often cited) field of scientific writing, identifying suitable areas of its application in obstetrics and gynecology, including fetal neurosonography, is an obvious next step. It is crucial to understand that ChatGPT, which is trained on massive amounts of text data, mimics statistical patterns of human language and generates outputs based on probabilities, thus emulating the dynamics of human conversation [97,98,100,101]. Before addressing the potential applications of ChatGPT in the context of fetal neurosonography, two fundamental, capability-limiting aspects must be kept in mind and must never be ignored in the interpretation of the subsequent discussion: first, although ChatGPT should increasingly become capable of generating meaning-semblant behavior, its technology currently lacks semantic understanding [102]; second, one must be aware of hallucinations and fabricated facts [98]. Furthermore, its generated content suffers from an absence of verifiable references [99,100,101].
Most recently, the latest version of ChatGPT (GPT-4) has been evaluated for its ability to facilitate referrals for fetal echocardiography to improve the early detection of and outcomes related to congenital heart defects (CHDs) [103]. Kopylov et al. found moderate agreement between ChatGPT and experts: comparing AI referrals to expert referrals indicated an agreement of around 80% (p < 0.001). For minor CHD cases, the AI referral rate was 65%, compared to 47% for experts. In the future, AI could presumably support clinicians in this area.
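Agreement between AI and expert referrals can be quantified with simple statistics. The sketch below computes raw agreement and Cohen’s kappa on fabricated referral labels; kappa is our assumed choice of chance-corrected metric, not necessarily the statistic used in the study.

```python
# Sketch: raw agreement vs. chance-corrected agreement on fabricated referral decisions.
from sklearn.metrics import cohen_kappa_score

expert =  [1, 0, 1, 1, 0, 0, 1, 0, 1, 1]   # 1 = refer for fetal echocardiography
chatgpt = [1, 0, 1, 0, 0, 1, 1, 0, 1, 1]

agreement = sum(e == c for e, c in zip(expert, chatgpt)) / len(expert)
kappa = cohen_kappa_score(expert, chatgpt)
print(f"raw agreement = {agreement:.0%}, Cohen's kappa = {kappa:.2f}")
```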
A similar approach would be conceivable for fetal neurosonography, as would the implementation of language-based AI to help summarize findings and optimize the wording of complex medical reports, or even to classify various sonographic CNS abnormalities into corresponding disease entities with differential diagnoses. In summary, ChatGPT cannot be used independently of experts in the field of fetal neurosonography and certainly will not replace them [100,104]. We agree with other authors that it is unlikely that ChatGPT, even in improved versions, will ever be able to provide reliable data at the standard required by evidence-based medicine [100,105]. However, repetitive, time-consuming tasks and routine clinical conclusions may soon be left to this chatbot.

4. Perspectives

AI-based applications, on whose algorithms prenatal diagnostics will increasingly depend, are fundamentally changing the way clinicians use US. Even if the development of AI-based applications in obstetric US is still in its infancy and automation has not yet reached the required level of clinical application, sometime soon the use of AI in fetal neurosonography will exceed the capabilities of human experts, as in other fields of fetal US [11,14].
A recently published paper introduced a novel approach that parameterizes 3D volume data using a deep neural network, which jointly refines the 2D-to-3D registrations and generates a full 3D reconstruction based on only a set of non-sensor-tracked freehand 2D scans [82].
Unfortunately, the black-box nature of most machine learning models remains unresolved, and many decisions made by intelligent systems still lack interpretable explanations. Explainable AI (XAI) is intended to provide methods, equations, and tools that make the results generated by an AI algorithm comprehensible to the user. By providing visual and feature-based explanations, XAI enhances the transparency and trustworthiness of AI predictions and could thus pave the way for the initial uptake of an AI model into clinical routine [106]. In this regard, a recent study analyzed the performance of several CNNs trained for fetal (CNS) plane detection on 12,400 images after input (image) enhancement with histogram equalization and fuzzy logic-based contrast enhancement. The classifiers achieved accuracies between 83.4% and 90% (depending on the classifier analyzed) and were subsequently evaluated by applying the LIME (Local Interpretable Model-Agnostic Explanations) and GradCAM (Gradient-weighted Class Activation Mapping) algorithms to examine their decision-making processes, providing explainability for their outputs [107]. These XAI models visually depict the region of the image contributing to a particular class, thereby justifying why the model predicted that class [108]. Very recently, Pegios and co-workers used iterative counterfactual explanations to generate realistic high-quality CNS standard planes from low-quality non-standard ones. Using their experimental approach (Diff-ICE), they demonstrated its superior value in the challenging task of fetal ultrasound quality assessment, as well as its potential for future applications [109]. To alleviate the risks of incomprehensibility and, more crucially, clinical irrelevance in forthcoming research, two publications proposed directive guidelines for transparent ML systems in medical image analyses (INTRPRT/Clinical XAI Guidelines) [110,111]. Interestingly, all sixteen commonly used heatmap XAI techniques evaluated by Jin et al. were found to be insufficient for clinical use due to their failure in the criteria of ‘truthfulness’ and ‘plausibility’ [111].
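A minimal sketch of the GradCAM idea cited above is given below: the class gradient is pooled over the last convolutional feature map to weight its channels, yielding a coarse heatmap of the image regions driving the prediction. The model and input are generic placeholders.

```python
# Sketch: Grad-CAM heatmap for the top predicted class of a generic classifier.
import torch
import torch.nn.functional as F
from torchvision import models

model = models.resnet18(weights=None).eval()
image = torch.randn(1, 3, 224, 224)                        # placeholder input

feats, grads = {}, {}
layer = model.layer4                                       # last conv block
layer.register_forward_hook(lambda m, i, o: feats.update(a=o))
layer.register_full_backward_hook(lambda m, gi, go: grads.update(a=go[0]))

logits = model(image)
logits[0, logits.argmax()].backward()                      # gradient of the top class score

weights = grads["a"].mean(dim=(2, 3), keepdim=True)        # pool gradients per channel
cam = F.relu((weights * feats["a"]).sum(dim=1))            # weighted sum of feature maps
cam = F.interpolate(cam.unsqueeze(1), size=image.shape[2:], mode="bilinear")
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)   # normalize heatmap to [0, 1]
print(cam.shape)                                           # (1, 1, 224, 224)
```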
Acknowledging the recent achievements of AI in medical image analyses, Sendra-Balcells and co-workers addressed the paradox that AI development is at its lowest level in rural areas of the world, such as Sub-Saharan Africa, even though current AI advancements include deep learning implementations in prenatal US diagnosis that could facilitate improved antenatal screening. In this regard, they investigated the generalizability of fetal US deep learning models to low-resource imaging settings [112]. The authors pre-trained a DL framework for standard plane detection (e.g., of the fetal brain) in centers with greater access to large clinical imaging datasets and subsequently applied this model to African settings. The results gained from transfer learning exemplify that domain adaptation might be a solution that supports prenatal care in low-income countries.
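The transfer-learning recipe described here (pre-train where data are plentiful, then adapt to a small target-site dataset) often amounts to freezing the backbone and retraining only the classification head. A minimal sketch, with all names illustrative:

```python
# Sketch: freeze a pre-trained backbone and retrain only the head for a new site.
import torch.nn as nn
from torchvision import models

model = models.resnet18(weights=None)      # load source-domain pre-trained weights here

for p in model.parameters():               # freeze the feature extractor
    p.requires_grad = False

model.fc = nn.Linear(model.fc.in_features, 5)   # new head: 5 assumed target-site classes

trainable = [p for p in model.parameters() if p.requires_grad]
print(sum(p.numel() for p in trainable), "trainable parameters")  # head only
```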
As a recent commentary by Tonni and Grisolia correctly stated, we will inevitably have to face the fact that the incorporation of AI solutions into US equipment will surge exponentially in the near future, producing beneficial effects not only in terms of diagnostic accuracy but also in the quality of the fetal examination in its entirety, including the appropriate surveying of complex anatomical structures (e.g., the fetal CNS and heart), the reporting of these exams, and the mitigation of medico-legal issues for physicians involved in both fetal imaging and fetomaternal care [113].

5. Discussion

Our focused review provides insights into both the current research topics and the clinical applications of AI-based algorithms in the field of fetal neurosonography and sheds light on how recent advancements in AI, particularly cutting-edge technologies like GANs, segmentation-based approaches, and XAI tools, could further enhance the US image analysis of the fetal CNS. The strength of our review is the exclusive inclusion of publications addressing the state of the art of AI-driven methods for the US assessment of the fetal brain, to enable clinicians to contextualize these applications in their clinical work-up, illustrate potential pitfalls, and outline future avenues of fetal neurosonography to pursue. In fact, AI-driven models have showcased how the accuracy, workflow efficiency, and interpretability of US imaging can be improved, which in turn might contribute to an earlier and more precise detection of fetal brain anomalies in utero. Prospectively, DL frameworks could be trained to detect structural abnormalities of the fetal brain, to label the type of malformation observed in diagnostic standard planes, and to generate alerts to prompt prenatal diagnoses. While 2D US remains the primary diagnostic tool for fetal neurosonography and (sequences of) 2D cross-sectional views of inherently 3D neuroanatomic structures are used to train AI algorithms, one must acknowledge a considerable loss of conceptual image information. The image data retrieved from 3D US with multiplanar reconstruction can complement conventional 2D US and overcome the limitations of the latter. Due to the well-described barriers (e.g., a lack of familiarity with volume postprocessing and skilled manual navigation) to the routine use of 3D US in prenatal diagnoses, recent advances in 3D imaging have focused on the implementation of intelligent algorithms for the automated extraction of data from 3D volume datasets. Several publications have demonstrated the superior value of AI tools in facilitating a rapid, easier, and less operator-dependent 3D volume analysis of fetal CNS anatomy. Alternatively, it has been shown that 3D volumes can be effectively constructed from 2D scans by applying ML/DL approaches. The inherent advantages of DL-based slice-to-volume (or 2D/3D) registration techniques comprise a fully automated alignment and transfer of spatial information between subjects and imaging modalities and the ability to correct for motion and misaligned slices when reconstructing the volume of a certain modality. In this regard, the interesting approach of 6D pose estimation of cutting planes (relative to the fetal brain center), as well as the recently released normative brain atlas, both of which apply comparable AI pipelines to enhance the visibility of the fetal CNS in 3D US images, must be mentioned, as these underscore the tremendous educational potential of these algorithms. Our capability to make AI-augmented assessments of fetal brain maturation also allows for GA estimation using CNS image data with high accuracy, exemplifying its clinical value in low-resource obstetrical settings.
However, there are several limitations to this review article. The selection of the included studies was based on the authors’ subjective assessment of the methodology, diagnostic relevance, and potential of the innovative AI algorithms described therein to be integrated into (future) clinical workflows. Although we were able to address the advantages of particular promising AI approaches and their added value, an in-depth comparison was not possible due to the heterogeneity of these models and would go beyond the scope of this review.

6. Conclusions

AI has increasingly been accepted as a fundamental component of a multitude of healthcare applications, such as medical image analyses. In light of this inevitable and intriguing flood of intelligent algorithms into modern US diagnostics, nothing less than the beginning of a new era of 5D ultrasound has been proclaimed. However, there are several challenges to AI’s deployment, particularly in fetal neurosonography, that must be solved: the need for large and diverse training datasets (2D/3D) in general; the difficulty of training accurate models for diagnosing evolving fetal brain abnormalities; the potential for algorithmic biases; the urgent need to address the troubling lack of transparency and interpretability of current AI algorithms, both to achieve their further translation into clinical diagnostic circuits and to avoid a reluctance to use AI models that seemingly only demonstrate a benefit in the optimal patient; and the seamless integration of AI models into diagnostic workflows, which requires careful consideration of ethical and legal implications, as well as rigorous validation studies to ensure the safety and efficacy of AI applications.
However, it remains to be seen how fast and in what manner promising techniques like 6D fetal pose estimation, slice-to-volume registration tools, and the real-time recognition of normal and abnormal CNS anatomy, to name a few, will be integrated into clinical practice and medical education, alongside the continued advancement of the current, already commercialized AI frameworks.

Author Contributions

Conceptualization, J.W. and J.L.S.; methodology, J.W.; investigation, J.W. and J.L.S.; writing—original draft preparation, J.W.; writing—review and editing, J.W. and J.L.S. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

The authors are willing to provide additional information about their research. For more information, please contact the corresponding author.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Morris, J.K.; Wellesley, D.G.; Barisic, I.; Addor, M.-C.; Bergman, J.E.H.; Braz, P.; Cavero-Carbonell, C.; Draper, E.S.; Gatt, M.; Haeusler, M.; et al. Epidemiology of congenital cerebral anomalies in Europe: A multicentre, population-based EUROCAT study. Arch. Dis. Child. 2019, 104, 1181–1187. [Google Scholar] [CrossRef] [PubMed]
  2. Tagliabue, G.; Tessandori, R.; Caramaschi, F.; Fabiano, S.; Maghini, A.; Tittarelli, A.; Vergani, D.; Bellotti, M.; Pisani, S.; Gambino, M.L.; et al. Descriptive epidemiology of selected birth defects, areas of Lombardy, Italy, 1999. Popul. Health Metr. 2007, 5, 4. [Google Scholar] [CrossRef]
  3. Paladini, D.; Malinger, G.; Birnbaum, R.; Monteagudo, A.; Pilu, G.; Salomon, L.J.; Timor-Tritsch, I.E. ISUOG Practice Guidelines (updated): Sonographic examination of the fetal central nervous system. Part 2: Performance of targeted neurosonography. Ultrasound Obstet. Gynecol. 2021, 57, 661–671. [Google Scholar] [CrossRef] [PubMed]
  4. Yagel, S.; Valsky, D.V. Re: ISUOG Practice Guidelines (updated): Sonographic examination of the fetal central nervous system. Part 1: Performance of screening examination and indications for targeted neurosonography. Ultrasound Obstet. Gynecol. 2021, 57, 173–174. [Google Scholar] [CrossRef] [PubMed]
  5. Snoek, R.; Albers, M.E.W.A.; Mulder, E.J.H.; Lichtenbelt, K.D.; de Vries, L.S.; Nikkels, P.G.J.; Cuppen, I.; Pistorius, L.R.; Manten, G.T.R.; de Heus, R. Accuracy of diagnosis and counseling of fetal brain anomalies prior to 24 weeks of gestational age. J. Matern. Neonatal Med. 2018, 31, 2188–2194. [Google Scholar] [CrossRef] [PubMed]
  6. Group, E.W. Role of prenatal magnetic resonance imaging in fetuses with isolated mild or moderate ventriculomegaly in the era of neurosonography: International multicenter study. Ultrasound Obstet. Gynecol. 2020, 56, 340–347. [Google Scholar] [CrossRef]
  7. Dall’asta, A.; Paramasivam, G.; Basheer, S.N.; Whitby, E.; Tahir, Z.; Lees, C. How to obtain diagnostic planes of the fetal central nervous system using three-dimensional ultrasound and a context-preserving rendering technology. Am. J. Obstet. Gynecol. 2019, 220, 215–229. [Google Scholar] [CrossRef]
  8. Di Mascio, D.; Buca, D.; Rizzo, G.; Khalil, A.; Timor-Tritsch, I.E.; Odibo, A.; Mappa, I.; Flacco, M.E.; Giancotti, A.; Liberati, M.; et al. Methodological Quality of Fetal Brain Structure Charts for Screening Examination and Targeted Neurosonography: A Systematic Review. Fetal Diagn. Ther. 2022, 49, 145–158. [Google Scholar] [CrossRef]
  9. Boutet, M.L.; Eixarch, E.; Ahumada-Droguett, P.; Nakaki, A.; Crovetto, F.; Cívico, M.S.; Borrás, A.; Manau, D.; Gratacós, E.; Crispi, F.; et al. Fetal neurosonography and infant neurobehavior following conception by assisted reproductive technology with fresh or frozen embryo transfer. Ultrasound Obstet. Gynecol. 2022, 60, 646–656. [Google Scholar] [CrossRef]
  10. Bastiaansen, W.A.; Klein, S.; Koning, A.H.; Niessen, W.J.; Steegers-Theunissen, R.P.; Rousian, M. Computational methods for the analysis of early-pregnancy brain ultrasonography: A systematic review. EBioMedicine 2023, 89, 104466. [Google Scholar] [CrossRef]
  11. Horgan, R.; Nehme, L.; Abuhamad, A. Artificial intelligence in obstetric ultrasound: A scoping review. Prenat. Diagn. 2023, 43, 1176–1219. [Google Scholar] [CrossRef] [PubMed]
  12. Jost, E.; Kosian, P.; Cruz, J.J.; Albarqouni, S.; Gembruch, U.; Strizek, B.; Recker, F. Evolving the Era of 5D Ultrasound? A Systematic Literature Review on the Applications for Artificial Intelligence Ultrasound Imaging in Obstetrics and Gynecology. J. Clin. Med. 2023, 12, 6833. [Google Scholar] [CrossRef] [PubMed]
  13. Torres, H.R.; Morais, P.; Oliveira, B.; Birdir, C.; Rüdiger, M.; Fonseca, J.C.; Vilaça, J.L. A review of image processing methods for fetal head and brain analysis in ultrasound images. Comput. Methods Programs Biomed. 2022, 215, 106629. [Google Scholar] [CrossRef] [PubMed]
  14. Xiao, S.; Zhang, J.; Zhu, Y.; Zhang, Z.; Cao, H.; Xie, M.; Zhang, L. Application and Progress of Artificial Intelligence in Fetal Ultrasound. J. Clin. Med. 2023, 12, 3298. [Google Scholar] [CrossRef] [PubMed]
  15. Dhombres, F.; Bonnard, J.; Bailly, K.; Maurice, P.; Papageorghiou, A.T.; Jouannic, J.-M. Contributions of Artificial Intelligence Reported in Obstetrics and Gynecology Journals: Systematic Review. J. Med. Internet Res. 2022, 24, e35465. [Google Scholar] [CrossRef]
  16. Yousef, R.; Gupta, G.; Yousef, N.; Khari, M. A holistic overview of deep learning approach in medical imaging. Multimed. Syst. 2022, 28, 881–914. [Google Scholar] [CrossRef]
  17. Drukker, L.; Noble, J.A.; Papageorghiou, A.T. Introduction to artificial intelligence in ultrasound imaging in obstetrics and gynecology. Ultrasound Obstet. Gynecol. 2020, 56, 498–505. [Google Scholar] [CrossRef]
  18. Fiorentino, M.C.; Villani, F.P.; Di Cosmo, M.; Frontoni, E.; Moccia, S. A review on deep-learning algorithms for fetal ultrasound-image analysis. Med. Image Anal. 2023, 83, 102629. [Google Scholar] [CrossRef]
  19. Zegarra, R.R.; Ghi, T. Use of artificial intelligence and deep learning in fetal ultrasound imaging. Ultrasound Obstet. Gynecol. 2023, 62, 185–194. [Google Scholar] [CrossRef]
  20. Ghabri, H.; Alqahtani, M.S.; Ben Othman, S.; Al-Rasheed, A.; Abbas, M.; Almubarak, H.A.; Sakli, H.; Abdelkarim, M.N. Transfer learning for accurate fetal organ classification from ultrasound images: A potential tool for maternal healthcare providers. Sci. Rep. 2023, 13, 17904. [Google Scholar] [CrossRef]
  21. Box, G.E.P. Robustness in the Strategy of Scientific Model Building. In Robustness in Statistics; Launer, R.L., Wilkinson, G.N., Eds.; Academic Press: Cambridge, MA, USA, 1979; pp. 201–236. [Google Scholar]
  22. Yeo, L.; Romero, R. New and advanced features of fetal intelligent navigation echocardiography (FINE) or 5D heart. J. Matern. Neonatal Med. 2022, 35, 1498–1516. [Google Scholar] [CrossRef] [PubMed]
  23. Namburete, A.I.; Papież, B.W.; Fernandes, M.; Wyburd, M.K.; Hesse, L.S.; Moser, F.A.; Ismail, L.C.; Gunier, R.B.; Squier, W.; Ohuma, E.O.; et al. Normative spatiotemporal fetal brain maturation with satisfactory development at 2 years. Nature 2023, 623, 106–114. [Google Scholar] [CrossRef] [PubMed]
24. Moser, F.; Huang, R.; Papież, B.W.; Namburete, A.I.L. BEAN: Brain Extraction and Alignment Network for 3D Fetal Neurosonography. NeuroImage 2022, 258, 119341. [Google Scholar] [CrossRef] [PubMed]
  25. Namburete, A.I.; Xie, W.; Yaqub, M.; Zisserman, A.; Noble, J.A. Fully-automated alignment of 3D fetal brain ultrasound to a canonical reference space using multi-task learning. Med. Image Anal. 2018, 46, 1–14. [Google Scholar] [CrossRef] [PubMed]
26. Moser, F.; Huang, R.; Papageorghiou, A.T.; Papież, B.W.; Namburete, A. Automated Fetal Brain Extraction from Clinical Ultrasound Volumes Using 3D Convolutional Neural Networks; Springer International Publishing: Cham, Switzerland, 2020. [Google Scholar]
  27. Gholipour, A.; Rollins, C.K.; Velasco-Annis, C.; Ouaalam, A.; Akhondi-Asl, A.; Afacan, O.; Ortinau, C.M.; Clancy, S.; Limperopoulos, C.; Yang, E.; et al. A normative spatiotemporal MRI atlas of the fetal brain for automatic segmentation and analysis of early brain growth. Sci. Rep. 2017, 7, 476. [Google Scholar] [CrossRef]
28. Gembicki, M.; Welp, A.; Scharf, J.L.; Dracopoulos, C.; Weichert, J. A Clinical Approach to Semiautomated Three-Dimensional Fetal Brain Biometry-Comparing the Strengths and Weaknesses of Two Diagnostic Tools: 5DCNS+™ and SonoCNS™. J. Clin. Med. 2023, 12, 5334. [Google Scholar] [CrossRef]
  29. Welp, A.; Gembicki, M.; Dracopoulos, C.; Scharf, J.L.; Rody, A.; Weichert, J. Applicability of a semiautomated volumetric approach (5D CNS+™) for detailed antenatal reconstruction of abnormal fetal CNS anatomy. BMC Med. Imaging 2022, 22, 154. [Google Scholar] [CrossRef]
  30. Welp, A.; Gembicki, M.; Rody, A.; Weichert, J. Validation of a semiautomated volumetric approach for fetal neurosonography using 5DCNS+ in clinical data from >1100 consecutive pregnancies. Child’s Nerv. Syst. 2020, 36, 2989–2995. [Google Scholar] [CrossRef]
  31. Lu, J.L.A.; Resta, S.; Marra, M.C.; Patelli, C.; Stefanachi, V.; Rizzo, G. Validation of an automatic software in assessing fetal brain volume from three dimensional ultrasonographic volumes: Comparison with manual analysis. J. Clin. Ultrasound 2023, 51, 1146–1151. [Google Scholar] [CrossRef]
  32. Alzubaidi, M.; Agus, M.; Alyafei, K.; Althelaya, K.A.; Shah, U.; Abd-Alrazaq, A.; Anbar, M.; Makhlouf, M.; Househ, M. Toward deep observation: A systematic survey on artificial intelligence techniques to monitor fetus via ultrasound images. iScience 2022, 25, 104713. [Google Scholar] [CrossRef]
33. Tang, X. The role of artificial intelligence in medical imaging research. BJR Open 2020, 2, 20190031. [Google Scholar] [CrossRef] [PubMed]
  34. Rizzo, G.; Aiello, E.; Pietrolucci, M.E.; Arduini, D. The feasibility of using 5D CNS software in obtaining standard fetal head measurements from volumes acquired by three-dimensional ultrasonography: Comparison with two-dimensional ultrasound. J. Matern. Neonatal Med. 2016, 29, 2217–2222. [Google Scholar] [CrossRef] [PubMed]
35. Rizzo, G.; Capponi, A.; Persico, N.; Ghi, T.; Nazzaro, G.; Boito, S.; Pietrolucci, M.E.; Arduini, D. 5D CNS+ Software for Automatically Imaging Axial, Sagittal, and Coronal Planes of Normal and Abnormal Second-Trimester Fetal Brains. J. Ultrasound Med. 2016, 35, 2263–2272. [Google Scholar] [CrossRef] [PubMed]
  36. Grandjean, G.A.; Hossu, G.; Bertholdt, C.; Noble, P.; Morel, O.; Grangé, G. Artificial intelligence assistance for fetal head biometry: Assessment of automated measurement software. Diagn. Interv. Imaging 2018, 99, 709–716. [Google Scholar] [CrossRef]
  37. Pluym, I.D.; Afshar, Y.; Holliman, K.; Kwan, L.; Bolagani, A.; Mok, T.; Silver, B.; Ramirez, E.; Han, C.S.; Platt, L.D. Accuracy of automated three-dimensional ultrasound imaging technique for fetal head biometry. Ultrasound Obstet. Gynecol. 2021, 57, 798–803. [Google Scholar] [CrossRef]
  38. Han, X.; Yu, J.; Yang, X.; Chen, C.; Zhou, H.; Qiu, C.; Cao, Y.; Zhang, T.; Peng, M.; Zhu, G.; et al. Artificial intelligence assistance for fetal development: Evaluation of an automated software for biometry measurements in the mid-trimester. BMC Pregnancy Childbirth 2024, 24, 158. [Google Scholar] [CrossRef]
  39. Yaqub, M.; Napolitano, R.; Ioannou, C.; Papageorghiou, A.T.; Noble, J. Automatic detection of local fetal brain structures in ultrasound images. In Proceedings of the International Symposium on Biomedical Imaging, Barcelona, Spain, 2–5 May 2012; pp. 1555–1558. [Google Scholar]
  40. Cuingnet, R.; Somphone, O.; Mory, B.; Prevost, R.; Yaqub, M.; Napolitano, R.; Papageorghiou, A.; Roundhill, D.; Noble, J.A.; Ardon, R. Where is my baby? A fast fetal head auto-alignment in 3D-ultrasound. In Proceedings of the 2013 IEEE 10th International Symposium on Biomedical Imaging, San Francisco, CA, USA, 7–11 April 2013. [Google Scholar]
  41. Sofka, M.; Zhang, J.; Good, S.; Zhou, S.K.; Comaniciu, D. Automatic detection and measurement of structures in fetal head ultrasound volumes using sequential estimation and Integrated Detection Network (IDN). IEEE Trans. Med. Imaging 2014, 33, 1054–1070. [Google Scholar] [CrossRef]
  42. Namburete, A.I.; Stebbing, R.V.; Kemp, B.; Yaqub, M.; Papageorghiou, A.T.; Noble, J.A. Learning-based prediction of gestational age from ultrasound images of the fetal brain. Med. Image Anal. 2015, 21, 72–86. [Google Scholar] [CrossRef]
  43. Yaqub, M.; Kelly, B.; Papageorghiou, A.T.; Noble, J.A. Guided Random Forests for Identification of Key Fetal Anatomy and Image Categorization in Ultrasound Scans. In Proceedings of the Medical Image Computing and Computer-Assisted Intervention—MICCAI, Munich, Germany, 5–9 October 2015; Springer: Cham, Switzerland, 2015. [Google Scholar]
  44. Baumgartner, C.F.; Kamnitsas, K.; Matthew, J.; Smith, S.; Kainz, B.; Rueckert, D. Real-Time Standard Scan Plane Detection and Localisation in Fetal Ultrasound Using Fully Convolutional Neural Networks. In Proceedings of the Medical Image Computing and Computer-Assisted Intervention—MICCAI, Athens, Greece, 17–21 October 2016; Springer: Cham, Switzerland, 2016. [Google Scholar]
  45. Sridar, P.; Kumar, A.; Quinton, A.; Nanan, R.; Kim, J.; Krishnakumar, R. Decision Fusion-Based Fetal Ultrasound Image Plane Classification Using Convolutional Neural Networks. Ultrasound Med. Biol. 2019, 45, 1259–1273. [Google Scholar] [CrossRef]
  46. Yaqub, M.; Kelly, B.; Papageorghiou, A.T.; Noble, J.A. A Deep Learning Solution for Automatic Fetal Neurosonographic Diagnostic Plane Verification Using Clinical Standard Constraints. Ultrasound Med. Biol. 2017, 43, 2925–2933. [Google Scholar] [CrossRef]
  47. Qu, R.; Xu, G.; Ding, C.; Jia, W.; Sun, M. Deep Learning-Based Methodology for Recognition of Fetal Brain Standard Scan Planes in 2D Ultrasound Images. IEEE Access 2020, 8, 44443–44451. [Google Scholar] [CrossRef]
48. Huang, R.; Xie, W.; Noble, J.A. VP-Nets: Efficient automatic localization of key brain structures in 3D fetal neurosonography. Med. Image Anal. 2018, 47, 127–139. [Google Scholar] [CrossRef] [PubMed]
  49. Huang, R.; Namburete, A.; Noble, A. Learning to segment key clinical anatomical structures in fetal neurosonography informed by a region-based descriptor. J. Med. Imaging 2018, 5, 014007. [Google Scholar] [CrossRef] [PubMed]
50. van den Heuvel, T.L.A.; de Bruijn, D.; de Korte, C.L.; van Ginneken, B. Automated measurement of fetal head circumference using 2D ultrasound images. PLoS ONE 2018, 13, e0200412. [Google Scholar] [CrossRef]
  51. Dou, H.; Yang, X.; Qian, J.; Xue, W.; Qin, H.; Wang, X.; Yu, L.; Wang, S.; Xiong, Y.; Heng, P.A.; et al. Agent with Warm Start and Active Termination for Plane Localization in 3D Ultrasound. In Proceedings of the Medical Image Computing and Computer Assisted Intervention—MICCAI, Shenzhen, China, 13–17 October 2019; Springer: Cham, Switzerland, 2019. [Google Scholar]
  52. Sahli, H.; Mouelhi, A.; Ben Slama, A.; Sayadi, M.; Rachdi, R. Supervised classification approach of biometric measures for automatic fetal defect screening in head ultrasound images. J. Med. Eng. Technol. 2019, 43, 279–286. [Google Scholar] [CrossRef]
  53. Alansary, A.; Oktay, O.; Li, Y.; Le Folgoc, L.; Hou, B.; Vaillant, G.; Kamnitsas, K.; Vlontzos, A.; Glocker, B.; Kainz, B.; et al. Evaluating reinforcement learning agents for anatomical landmark detection. Med. Image Anal. 2019, 53, 156–164. [Google Scholar] [CrossRef]
  54. Lin, Z.; Li, S.; Ni, D.; Liao, Y.; Wen, H.; Du, J.; Chen, S.; Wang, T.; Lei, B. Multi-task learning for quality assessment of fetal head ultrasound images. Med. Image Anal. 2019, 58, 101548. [Google Scholar] [CrossRef]
  55. Bastiaansen, W.A.P.; Rousian, M.; Steegers-Theunissen, R.P.M.; Niessen, W.J.; Koning, A.; Klein, S. Towards Segmentation and Spatial Alignment of the Human Embryonic Brain Using Deep Learning for Atlas-Based Registration. In Proceedings of the Biomedical Image Registration, Portorož, Slovenia, 1–2 December 2020; Springer: Cham, Switzerland, 2020. [Google Scholar]
56. Xu, Y.; Lee, L.H.; Drukker, L.; Yaqub, M.; Papageorghiou, A.T.; Noble, J.A. Simulating realistic fetal neurosonography images with appearance and growth change using cycle-consistent adversarial networks and an evaluation. J. Med. Imaging 2020, 7, 057001. [Google Scholar] [CrossRef]
  57. Ramos, R.; Olveres, J.; Escalante-Ramírez, B.; Arámbula Cosío, F. Deep Learning Approach for Cerebellum Localization in Prenatal Ultrasound Images; SPIE: Bellingham, WA, USA, 2020; Volume 11353. [Google Scholar]
  58. Maraci, M.A.; Yaqub, M.; Craik, R.; Beriwal, S.; Self, A.; von Dadelszen, P.; Papageorghiou, A.; Noble, J.A. Toward point-of-care ultrasound estimation of fetal gestational age from the trans-cerebellar diameter using CNN-based ultrasound image analysis. J. Med. Imaging 2020, 7, 014501. [Google Scholar] [CrossRef]
  59. Chen, X.; He, M.; Dan, T.; Wang, N.; Lin, M.; Zhang, L.; Xian, J.; Cai, H.; Xie, H. Automatic Measurements of Fetal Lateral Ventricles in 2D Ultrasound Images Using Deep Learning. Front. Neurol. 2020, 11, 526. [Google Scholar] [CrossRef]
  60. Xie, B.; Lei, T.; Wang, N.; Cai, H.; Xian, J.; He, M.; Zhang, L.; Xie, H. Computer-aided diagnosis for fetal brain ultrasound images using deep convolutional neural networks. Int. J. Comput. Assist. Radiol. Surg. 2020, 15, 1303–1312. [Google Scholar] [CrossRef] [PubMed]
  61. Xie, H.N.; Wang, N.; He, M.; Zhang, L.H.; Cai, H.M.; Xian, J.B.; Lin, M.F.; Zheng, J.; Yang, Y.Z. Using deep-learning algorithms to classify fetal brain ultrasound images as normal or abnormal. Ultrasound Obstet. Gynecol. 2020, 56, 579–587. [Google Scholar] [CrossRef] [PubMed]
  62. Zeng, Y.; Tsui, P.H.; Wu, W.; Zhou, Z.; Wu, S. Fetal Ultrasound Image Segmentation for Automatic Head Circumference Biometry Using Deeply Supervised Attention-Gated V-Net. J. Digit. Imaging 2021, 34, 134–148. [Google Scholar] [CrossRef] [PubMed]
  63. Burgos-Artizzu, X.P.; Coronado-Gutiérrez, D.; Valenzuela-Alcaraz, B.; Vellvé, K.; Eixarch, E.; Crispi, F.; Bonet-Carne, E.; Bennasar, M.; Gratacos, E. Analysis of maturation features in fetal brain ultrasound via artificial intelligence for the estimation of gestational age. Am. J. Obstet. Gynecol. MFM 2021, 3, 100462. [Google Scholar] [CrossRef] [PubMed]
64. Gofer, S.; Haik, O.; Bardin, R.; Gilboa, Y.; Perlman, S. Machine Learning Algorithms for Classification of First-Trimester Fetal Brain Ultrasound Images. J. Ultrasound Med. 2022, 41, 1773–1779. [Google Scholar] [CrossRef]
  65. Skelton, E.; Matthew, J.; Li, Y.; Khanal, B.; Martinez, J.C.; Toussaint, N.; Gupta, C.; Knight, C.; Kainz, B.; Hajnal, J.; et al. Towards automated extraction of 2D standard fetal head planes from 3D ultrasound acquisitions: A clinical evaluation and quality assessment comparison. Radiography 2021, 27, 519–526. [Google Scholar] [CrossRef]
  66. Yeung, P.-H.; Aliasi, M.; Papageorghiou, A.T.; Haak, M.; Xie, W.; Namburete, A.I. Learning to map 2D ultrasound images into 3D space with minimal human annotation. Med. Image Anal. 2021, 70, 101998. [Google Scholar] [CrossRef]
  67. Montero, A.; Bonet-Carne, E.; Burgos-Artizzu, X.P. Generative Adversarial Networks to Improve Fetal Brain Fine-Grained Plane Classification. Sensors 2021, 21, 7975. [Google Scholar] [CrossRef]
  68. Moccia, S.; Fiorentino, M.C.; Frontoni, E. Mask-R2CNN: A distance-field regression version of Mask-RCNN for fetal-head delineation in ultrasound images. Int. J. Comput. Assist. Radiol. Surg. 2021, 16, 1711–1718. [Google Scholar] [CrossRef]
  69. Wyburd, M.K.; Hesse, L.S.; Aliasi, M.; Jenkinson, M.; Papageorghiou, A.T.; Haak, M.C.; Namburete, A.I. Assessment of Regional Cortical Development Through Fissure Based Gestational Age Estimation in 3D Fetal Ultrasound. In Proceedings of the Uncertainty for Safe Utilization of Machine Learning in Medical Imaging, and Perinatal Imaging, Placental and Preterm Image Analysis, Strasbourg, France, 1 October 2021; Springer: Cham, Switzerland, 2021. [Google Scholar]
70. Shu, X.; Chang, F.; Zhang, X.; Shao, C.; Yang, X. ECAU-Net: Efficient channel attention U-Net for fetal ultrasound cerebellum segmentation. Biomed. Signal Process. Control 2022, 75, 103528. [Google Scholar] [CrossRef]
71. Hesse, L.S.; Aliasi, M.; Moser, F.; INTERGROWTH-21st Consortium; Haak, M.C.; Xie, W.; Jenkinson, M.; Namburete, A.I. Subcortical segmentation of the fetal brain in 3D ultrasound using deep learning. NeuroImage 2022, 254, 119117. [Google Scholar] [CrossRef]
  72. Di Vece, C.; Dromey, B.; Vasconcelos, F.; David, A.L.; Peebles, D.; Stoyanov, D. Deep learning-based plane pose regression in obstetric ultrasound. Int. J. Comput. Assist. Radiol. Surg. 2022, 17, 833–839. [Google Scholar] [CrossRef] [PubMed]
  73. Lin, M.; He, X.; Guo, H.; He, M.; Zhang, L.; Xian, J.; Lei, T.; Xu, Q.; Zheng, J.; Feng, J.; et al. Use of real-time artificial intelligence in detection of abnormal image patterns in standard sonographic reference planes in screening for fetal intracranial malformations. Ultrasound Obstet. Gynecol. 2022, 59, 304–316. [Google Scholar] [CrossRef] [PubMed]
  74. Sreelakshmy, R.; Titus, A.; Sasirekha, N.; Logashanmugam, E.; Begam, R.B.; Ramkumar, G.; Raju, R. An Automated Deep Learning Model for the Cerebellum Segmentation from Fetal Brain Images. BioMed Res. Int. 2022, 2022, 8342767. [Google Scholar] [CrossRef] [PubMed]
  75. Alzubaidi, M.; Agus, M.; Shah, U.; Makhlouf, M.; Alyafei, K.; Househ, M. Ensemble Transfer Learning for Fetal Head Analysis: From Segmentation to Gestational Age and Weight Prediction. Diagnostics 2022, 12, 2229. [Google Scholar] [CrossRef]
  76. Coronado-Gutiérrez, D.; Eixarch, E.; Monterde, E.; Matas, I.; Traversi, P.; Gratacós, E.; Bonet-Carne, E.; Burgos-Artizzu, X.P. Automatic Deep Learning-Based Pipeline for Automatic Delineation and Measurement of Fetal Brain Structures in Routine Mid-Trimester Ultrasound Images. Fetal Diagn. Ther. 2023, 50, 480–490. [Google Scholar] [CrossRef]
  77. Lin, M.; Zhou, Q.; Lei, T.; Shang, N.; Zheng, Q.; He, X.; Wang, N.; Xie, H. Deep learning system improved detection efficacy of fetal intracranial malformations in a randomized controlled trial. NPJ Digit. Med. 2023, 6, 191. [Google Scholar] [CrossRef]
  78. Rauf, F.; Khan, M.A.; Bashir, A.K.; Jabeen, K.; Hamza, A.; Alzahrani, A.I.; Alalwan, N.; Masood, A. Automated deep bottleneck residual 82-layered architecture with Bayesian optimization for the classification of brain and common maternal fetal ultrasound planes. Front. Med. 2023, 10, 1330218. [Google Scholar] [CrossRef]
  79. Alzubaidi, M.; Agus, M.; Makhlouf, M.; Anver, F.; Alyafei, K.; Househ, M. Large-scale annotation dataset for fetal head biometry in ultrasound images. Data Brief 2023, 51, 109708. [Google Scholar] [CrossRef]
  80. Alzubaidi, M.; Shah, U.; Agus, M.; Househ, M. FetSAM: Advanced Segmentation Techniques for Fetal Head Biometrics in Ultrasound Imagery. IEEE Open J. Eng. Med. Biol. 2024, 5, 281–295. [Google Scholar] [CrossRef]
  81. Di Vece, C.; Cirigliano, A.; Le Lous, M.; Napolitano, R.; David, A.L.; Peebles, D.; Jannin, P.; Vasconcelos, F.; Stoyanov, D. Measuring proximity to standard planes during fetal brain ultrasound scanning. arXiv 2024, arXiv:2404.07124. [Google Scholar]
  82. Yeung, P.-H.; Hesse, L.S.; Aliasi, M.; Haak, M.C.; Xie, W.; Namburete, A.I. Sensorless volumetric reconstruction of fetal brain freehand ultrasound scans with deep implicit representation. Med. Image Anal. 2024, 94, 103147. [Google Scholar] [CrossRef] [PubMed]
  83. Dubey, G.; Srivastava, S.; Jayswal, A.K.; Saraswat, M.; Singh, P.; Memoria, M. Fetal Ultrasound Segmentation and Measurements Using Appearance and Shape Prior Based Density Regression with Deep CNN and Robust Ellipse Fitting. J. Imaging Inform. Med. 2024, 37, 247–267. [Google Scholar] [CrossRef] [PubMed]
  84. Pokaprakarn, T.; Prieto, J.C.; Price, J.T.; Kasaro, M.P.; Sindano, N.; Shah, H.R.; Peterson, M.; Akapelwa, M.M.; Kapilya, F.M.; Sebastião, Y.V.; et al. AI Estimation of Gestational Age from Blind Ultrasound Sweeps in Low-Resource Settings. NEJM Evid. 2022, 1, EVIDoa2100058. [Google Scholar] [CrossRef] [PubMed]
  85. Lee, L.H.; Bradburn, E.; Craik, R.; Yaqub, M.; Norris, S.A.; Ismail, L.C.; Ohuma, E.O.; Barros, F.C.; Lambert, A.; Carvalho, M.; et al. Machine learning for accurate estimation of fetal gestational age based on ultrasound images. NPJ Digit. Med. 2023, 6, 36. [Google Scholar] [CrossRef]
  86. Lee, C.; Willis, A.; Chen, C.; Sieniek, M.; Watters, A.; Stetson, B.; Uddin, A.; Wong, J.; Pilgrim, R.; Chou, K.; et al. Development of a Machine Learning Model for Sonographic Assessment of Gestational Age. JAMA Netw. Open 2023, 6, e2248685. [Google Scholar] [CrossRef]
87. Chen, C.; Yang, X.; Huang, Y.; Shi, W.; Cao, Y.; Luo, M.; Hu, X.; Zhu, L.; Yu, L.; Yue, K.; et al. FetusMapV2: Enhanced Fetal Pose Estimation in 3D Ultrasound. Med. Image Anal. 2024, 91, 103013. [Google Scholar] [CrossRef]
  88. Yeung, P.-H.; Aliasi, M.; Haak, M.; Xie, W.; Namburete, A.I.L. Adaptive 3D Localization of 2D Freehand Ultrasound Brain Images. In Proceedings of the Medical Image Computing and Computer Assisted Intervention—MICCAI, Singapore, 18–22 September 2022; Springer: Cham, Switzerland, 2022. [Google Scholar]
  89. Goodfellow, I.; Pouget-Abadie, J.; Mirza, M.; Xu, B.; Warde-Farley, D.; Ozair, S.; Courville, A.; Bengio, Y. Generative Adversarial Nets. In Proceedings of the Neural Information Processing Systems, Montreal, QC, Canada, 8–13 December 2014. [Google Scholar]
90. Wang, R.; Bashyam, V.; Yang, Z.; Yu, F.; Tassopoulou, V.; Chintapalli, S.S.; Skampardoni, I.; Sreepada, L.P.; Sahoo, D.; Nikita, K.; et al. Applications of generative adversarial networks in neuroimaging and clinical neuroscience. NeuroImage 2023, 269, 119898. [Google Scholar] [CrossRef]
  91. Lasala, A.; Fiorentino, M.C.; Micera, S.; Bandini, A.; Moccia, S. Exploiting class activation mappings as prior to generate fetal brain ultrasound images with GANs. In Proceedings of the 2023 45th Annual International Conference of the IEEE Engineering in Medicine & Biology Society (EMBC), Sydney, Australia, 24–27 July 2023; Volume 2023, pp. 1–4. [Google Scholar]
  92. Wolterink, J.M.; Mukhopadhyay, A.; Leiner, T.; Vogl, T.J.; Bucher, A.M.; Išgum, I. Generative Adversarial Networks: A Primer for Radiologists. RadioGraphics 2021, 41, 840–857. [Google Scholar] [CrossRef]
  93. Lasala, A.; Fiorentino, M.C.; Bandini, A.; Moccia, S. FetalBrainAwareNet: Bridging GANs with anatomical insight for fetal ultrasound brain plane synthesis. Comput. Med. Imaging Graph. 2024, 116, 102405. [Google Scholar] [CrossRef]
  94. Iskandar, M.; Mannering, H.; Sun, Z.; Matthew, J.; Kerdegari, H.; Peralta, L.; Xochicale, M. Towards Realistic Ultrasound Fetal Brain Imaging Synthesis. arXiv 2023, arXiv:2304.03941. [Google Scholar]
  95. Zhang, L.; Zhang, J. Ultrasound image denoising using generative adversarial networks with residual dense connectivity and weighted joint loss. PeerJ Comput. Sci. 2022, 8, e873. [Google Scholar] [CrossRef]
  96. Eid, M.C.; Yeung, P.-H.; Wyburd, M.K.; Henriques, J.F.; Namburete, A.I.L. RapidVol: Rapid Reconstruction of 3D Ultrasound Volumes from Sensorless 2D Scans. arXiv 2024, arXiv:2404.10766. [Google Scholar]
  97. Altmäe, S.; Sola-Leyva, A.; Salumets, A. Artificial intelligence in scientific writing: A friend or a foe? Reprod. Biomed. Online 2023, 47, 3–9. [Google Scholar] [CrossRef] [PubMed]
  98. Bhayana, R. Chatbots and Large Language Models in Radiology: A Practical Primer for Clinical and Research Applications. Radiology 2024, 310, e232756. [Google Scholar] [CrossRef] [PubMed]
  99. Grünebaum, A.; Chervenak, J.; Pollet, S.L.; Katz, A.; Chervenak, F.A. The exciting potential for ChatGPT in obstetrics and gynecology. Am. J. Obstet. Gynecol. 2023, 228, 696–705. [Google Scholar] [CrossRef]
  100. Lee, Y.; Kim, S.Y. Potential applications of ChatGPT in obstetrics and gynecology in Korea: A review article. Obstet. Gynecol. Sci. 2024, 67, 153–159. [Google Scholar] [CrossRef]
  101. Youssef, A. Unleashing the AI revolution: Exploring the capabilities and challenges of large language models and text-to-image AI programs. Ultrasound Obstet. Gynecol. 2023, 62, 308–312. [Google Scholar] [CrossRef]
  102. Titus, L.M. Does ChatGPT have semantic understanding? A problem with the statistics-of-occurrence strategy. Cogn. Syst. Res. 2024, 83, 101174. [Google Scholar] [CrossRef]
  103. Kopylov, L.G.; Goldrat, I.; Maymon, R.; Svirsky, R.; Wiener, Y.; Klang, E. Utilizing ChatGPT to facilitate referrals for fetal echocardiography. Fetal Diagn. Ther. 2024. [Google Scholar] [CrossRef]
  104. Braun, E.-M.; Juhasz-Böss, I.; Solomayer, E.-F.; Truhn, D.; Keller, C.; Heinrich, V.; Braun, B.J. Will I soon be out of my job? Quality and guideline conformity of ChatGPT therapy suggestions to patient inquiries with gynecologic symptoms in a palliative setting. Arch. Gynecol. Obstet. 2024, 309, 1543–1549. [Google Scholar] [CrossRef] [PubMed]
105. Haverkamp, W.; Tennenbaum, J.; Strodthoff, N. ChatGPT fails the test of evidence-based medicine. Eur. Heart J. Digit. Health 2023, 4, 366–367. [Google Scholar] [CrossRef] [PubMed]
  106. Fischer, A.; Rietveld, A.; Teunissen, P.; Hoogendoorn, M.; Bakker, P. What is the future of artificial intelligence in obstetrics? A qualitative study among healthcare professionals. BMJ Open 2023, 13, e076017. [Google Scholar] [CrossRef] [PubMed]
  107. Rahman, R.; Alam, M.G.R.; Reza, M.T.; Huq, A.; Jeon, G.; Uddin, Z.; Hassan, M.M. Demystifying evidential Dempster Shafer-based CNN architecture for fetal plane detection from 2D ultrasound images leveraging fuzzy-contrast enhancement and explainable AI. Ultrasonics 2023, 132, 107017. [Google Scholar] [CrossRef] [PubMed]
  108. Harikumar, A.; Surendran, S.; Gargi, S. Explainable AI in Deep Learning Based Classification of Fetal Ultrasound Image Planes. Procedia Comput. Sci. 2024, 233, 1023–1033. [Google Scholar] [CrossRef]
  109. Pegios, P.; Lin, M.; Weng, N.; Svendsen, M.B.S.; Bashir, Z.; Bigdeli, S.; Christensen, A.N.; Tolsgaard, M.; Feragen, A. Diffusion-based Iterative Counterfactual Explanations for Fetal Ultrasound Image Quality Assessment. arXiv 2024, arXiv:2403.08700. [Google Scholar]
  110. Chen, H.; Gomez, C.; Huang, C.-M.; Unberath, M. Explainable medical imaging AI needs human-centered design: Guidelines and evidence from a systematic review. NPJ Digit. Med. 2022, 5, 156. [Google Scholar] [CrossRef]
111. Jin, W.; Li, X.; Fatehi, M.; Hamarneh, G. Guidelines and evaluation of clinical explainable AI in medical image analysis. Med. Image Anal. 2023, 84, 102684. [Google Scholar] [CrossRef]
  112. Sendra-Balcells, C.; Campello, V.M.; Torrents-Barrena, J.; Ahmed, Y.A.; Elattar, M.; Ohene-Botwe, B.; Nyangulu, P.; Stones, W.; Ammar, M.; Benamer, L.N.; et al. Generalisability of fetal ultrasound deep learning models to low-resource imaging settings in five African countries. Sci. Rep. 2023, 13, 2728. [Google Scholar]
  113. Tonni, G.; Grisolia, G. Simulator, machine learning, and artificial intelligence: Time has come to assist prenatal ultrasound diagnosis. J. Clin. Ultrasound 2023, 51, 1164–1165. [Google Scholar] [CrossRef]