Systematic Review

Evolving the Era of 5D Ultrasound? A Systematic Literature Review on the Applications for Artificial Intelligence Ultrasound Imaging in Obstetrics and Gynecology

1 Department of Obstetrics and Gynecology, University Hospital Bonn, Venusberg Campus 1, 53127 Bonn, Germany
2 Department of Diagnostic and Interventional Radiology, University Hospital Bonn, Venusberg Campus 1, 53127 Bonn, Germany
3 Helmholtz AI, Helmholtz Munich, Ingolstädter Landstraße 1, 85764 Neuherberg, Germany
* Author to whom correspondence should be addressed.
J. Clin. Med. 2023, 12(21), 6833; https://doi.org/10.3390/jcm12216833
Submission received: 21 September 2023 / Revised: 17 October 2023 / Accepted: 25 October 2023 / Published: 29 October 2023
(This article belongs to the Special Issue Clinical Imaging Applications in Obstetrics and Gynecology)

Abstract: Artificial intelligence (AI) has gained prominence in medical imaging, particularly in obstetrics and gynecology (OB/GYN), where ultrasound (US) is the preferred method. US is considered cost-effective and easily accessible, but it is time-consuming and hindered by the need for specialized training. To overcome these limitations, AI models have been proposed for automated plane acquisition, anatomical measurements, and pathology detection. This study aims to give an overview of the recent literature on AI applications in OB/GYN US imaging, highlighting their benefits and limitations. For the methodology, a systematic literature search was performed in the PubMed and Cochrane Library databases. Matching abstracts were screened based on the PICOS (Participants, Intervention or Exposure, Comparison, Outcome, Study type) scheme. Articles with full-text copies were assigned to the subspecialties of OB/GYN and their respective research topics. As a result, this review includes 189 articles published from 1994 to 2023. Among these, 148 focus on obstetrics and 41 on gynecology. AI-assisted US applications span fetal biometry, echocardiography, and neurosonography, as well as the identification of adnexal and breast masses and the assessment of the endometrium and pelvic floor. To conclude, the applications for AI-assisted US in OB/GYN are abundant, especially in the subspecialty of obstetrics. However, while most studies focus on common application fields such as fetal biometry, this review outlines emerging and still experimental fields to promote further research.

1. Introduction

Artificial intelligence (AI) has become part of everyday life and, over the past years, has gained considerable significance in medical imaging. The term AI refers to various computer science technologies that enable machines to perform tasks simulating human intelligence. AI systems typically depend on the input of vast amounts of data, e.g., for pattern recognition, in order to learn to make predictions, classifications, recommendations, or decisions, either supervised by humans or without supervision. Machine learning and deep learning are two of the numerous subcategories of AI [1].
Widely described advantages of AI usage include improved productivity and efficiency and a reduction in human error. These benefits make AI exceptionally attractive for application in health care and, particularly, in medical imaging [1]. To meet the growing demand for AI software in health care, the U.S. Food and Drug Administration has so far certified 178 AI-enabled medical devices, and the list is continuously growing [2].
The field of obstetrics and gynecology (OB/GYN) is among the most imaging-intensive specialties, using the diagnostic tools of ultrasound (US), magnetic resonance imaging (MRI), computed tomography (CT), positron emission tomography (PET), laparoscopy, and others. The subspecialty of obstetrics in particular is profoundly dependent on US because of its non-invasive, cost-effective, real-time, and low-radiation characteristics for fetal scanning [3,4].
However, US imaging has limitations regarding comparability and reproducibility. Reasons for the high image variability include low image quality, the need for real-time interpretation, differences between US devices, and dependence on the sonographer’s experience [4]. These limitations increase when US is performed during pregnancy, which faces obstacles such as imaging artefacts caused by maternal (e.g., a thickened abdominal wall in obese patients) or fetal (e.g., fetal position or movements) factors, reduced tissue contrast (e.g., with reduced amniotic fluid), and the characteristics of increasing gestational age (GA) (e.g., growing fetal volume and increasing ossification of bones) [4]. In clinical settings, US is known to be time-consuming and can require a substantial amount of training and experience for specific indications, such as fetal echocardiography or neurosonography [5,6,7].
To overcome these restrictions, AI assistance in US imaging has been shown to reduce examination time, clinician workload, and inter- and intra-observer variability [8]. US imaging in OB/GYN represents a promising field of application for AI models due to its wide range of indications and the generation of high-volume data sets. At the same time, operators with different skill levels and differing US devices are challenging aspects that influence AI model performance.
However, the use of AI models is not without ethical debate [3]. Despite all efforts to automate US imaging, the existing literature emphasizes that AI is not meant to replace human work and input, but should assist clinicians and reduce their increasing workload [3,6]. As stated in the state-of-the-art review by Drukker et al. [3], to date, no AI method exists that generalizes across tasks the way an OB/GYN specialist can, who is capable of performing US scans of different organs and of the fetus in various GA periods. Therefore, the variety of AI models in the subspecialties of OB/GYN is tremendous and worth reviewing within the specific tasks.
So far, most original research articles and literature reviews in OB/GYN have focused on the common fields of AI application in medical imaging, such as the identification of breast or adnexal masses, automated fetal biometry, or fetal echocardiography. To the best of our knowledge, this review is the first to systematically display the variety of fields of applications among the subspecialties of OB/GYN.
The term ‘5D ultrasound’ is derived from the idea of expanding the technological US world with a further dimension. While 4D technology extends the 3D view of the scanned object with a time frame, enabling motion visualization, or so-called ‘real-time 3D’ [9], the term 5D is less commonly used to describe AI-assisted US imaging, including image enhancement processing or automated calculations [10,11]. As there is no clear definition of 5D US, we use the expression ‘5D ultrasound’ in this literature review to illustrate the extent to which AI models can create a new dimension in US imaging in OB/GYN by improving work efficacy, accuracy, and visibility in clinical settings.
This study aims to provide an overview of the recent literature on applications for AI in US imaging in the medical field of OB/GYN by working out the benefits and limitations of AI US support systems. Special focus is given to the distribution of research attention among the subspecialties of OB/GYN and to emerging, still experimental fields, in order to promote further research toward clinical applicability. Assessing the effectiveness of the applied AI models is not an aim of this study; therefore, all AI technologies are summarized under the term ‘AI’. By describing the current research emphasis, possibly missing scopes of application may be highlighted.

2. Materials and Methods

This systematic literature review was developed in accordance with the updated Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) statement [12,13]. The study was prospectively registered in the International Prospective Register of Systematic Reviews (PROSPERO) under registration number CRD42023434218.
For the literature search, the PubMed database was queried on 14 May 2023 using the following search string with the text availability filter ‘abstracts’:
((artificial intelligence) OR (deep learning) OR (machine learning) OR (artificial neural networks)) AND (ultrasound) AND ((obstetrics) OR (gynecology) OR (pregnancy)). Additionally, the Cochrane Library was searched on 14 May 2023 with the following query:
(Artificial intelligence OR deep learning OR machine learning OR artificial neural networks) AND ultrasound AND (obstetrics OR gynecology OR pregnancy) [Title Abstract Keyword]
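For reproducibility, the PubMed query above can also be executed programmatically. The following is a minimal sketch using Biopython’s Entrez module; the e-mail address, result cap, and the ‘hasabstract’ filter term are illustrative assumptions, not part of the documented methodology:

```python
# Minimal sketch: scripting the documented PubMed query with Biopython's
# Entrez module. E-mail and retmax are placeholder assumptions.
from Bio import Entrez

Entrez.email = "reviewer@example.org"  # required by NCBI etiquette (placeholder)

query = (
    "((artificial intelligence) OR (deep learning) OR (machine learning) "
    "OR (artificial neural networks)) AND (ultrasound) "
    "AND ((obstetrics) OR (gynecology) OR (pregnancy))"
)

# 'hasabstract' approximates the 'abstracts' text availability filter.
handle = Entrez.esearch(db="pubmed", term=query + " AND hasabstract", retmax=1000)
record = Entrez.read(handle)
print(record["Count"], "records;", len(record["IdList"]), "PMIDs retrieved")
```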
No restriction on the year of publication was applied. Relevant records in English or German were independently screened based on title and abstract by two authors for accordance with the eligibility criteria. Cases of disagreement were resolved in a consensus meeting. For adequate comprehensiveness of the search process, the PICOS search tool (Participants, Intervention or Exposure, Comparison, Outcome, Study type) was applied and used for judgement [12,13]. Table 1 shows the relevant literature characteristics presented under the PICOS headings. Records without the use of AI or US and studies focusing on AI applications in specialties other than OB/GYN were excluded in the screening process. Records describing AI calculations based on data obtained from US measurements but not on the image itself (e.g., crown-rump length (CRL) or cervical length) were also excluded.
After the initial screening process, full-text copies were retrieved for further analysis against the inclusion criteria. By extracting the fields of application, articles were assigned to the sections of either obstetrics or gynecology. From the full texts, the topic of AI application (e.g., fetal neurosonography or identification of breast masses) and the specific benefits and limitations of the AI application in the presented field were extracted. The proportions of research topics in the included literature are illustrated in two figures, one for each subspecialty of OB/GYN.

3. Results

3.1. Included Literature

Figure 1 depicts the PRISMA flow diagram for the screening process of reports included in this review. A total of 737 records were identified from the searched databases, resulting in 189 records considered adequate for inclusion in this review. Here, 148 records described the application of AI in US imaging in the field of obstetrics compared with 41 records in the field of gynecology (Figure 1). The included articles are displayed in Table S1 of the Supplementary Material for obstetric and Table S2 of the Supplementary Material for gynecological applications. In the following Results section, included articles are evaluated separately for both specialties.

3.2. Applications in Obstetrics

Related to the subspecialty of obstetrics, 148 research articles form part of this systematic literature review. In Figure 2, an overview of the various research topics presented in the included literature is depicted.

3.2.1. Fetal Biometry

The most common application of obstetric US examination remains the assessment of fetal growth, followed by the sonographic examination of maternal–fetal perfusion parameters, fetal malformations, placental morphology, and uterine abnormalities. Fetal growth assessment represents a repeated standardized examination throughout pregnancy to monitor fetal development and predict birth weight, and consequently may influence decision making regarding the timing of delivery. Biometric calculations are based on the acquisition of standard planes for measurements of fetal head circumference (HC), abdominal circumference (AC), and femur length [14]. The accuracy of these measurements is often reduced as a result of operator-, equipment-, patient-, and fetus-related factors.
  • Operator-related factors: US in general, and obstetric US in particular, is known to require substantial experience and to be extremely training dependent [15]. US examinations are known to exhibit high intra- and inter-operator variability [16].
  • Equipment-related factors: Especially in hospital settings, repeated examinations may be performed with different US machines, resulting in heterogeneous data. Furthermore, image quality depends on resource availability and access to high-end US devices [17], or on the use of point-of-care devices [18].
  • Patient-related factors: Maternal obesity is known to have an impact on image quality and visualization of the fetus, and thus limits the accuracy of obstetric US examinations [19].
  • Fetus-related factors: Fetal size, movements, and position, as well as multiple pregnancies or reduced amniotic fluid resulting in low contrast to surroundings may decrease the accuracy of measurements [20,21].
To minimize operator- and/or equipment-related influences, there have recently been attempts to automate measurements in obstetric US using AI algorithms. However, these attempts remain complex and are limited by the inevitable patient- and fetus-related constraints.
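To illustrate what these automated plane and measurement pipelines ultimately feed into, the following sketch computes an estimated fetal weight from HC, AC, and femur length using the widely cited three-parameter Hadlock formula; the choice of formula is an illustrative assumption, as the included studies rely on various growth models:

```python
import math

def hadlock_efw_grams(hc_cm: float, ac_cm: float, fl_cm: float) -> float:
    """Estimated fetal weight (g) via the three-parameter Hadlock formula:
    log10(EFW) = 1.326 - 0.00326*AC*FL + 0.0107*HC + 0.0438*AC + 0.158*FL,
    all measurements in cm. Chosen here for illustration only."""
    log10_efw = (1.326 - 0.00326 * ac_cm * fl_cm
                 + 0.0107 * hc_cm + 0.0438 * ac_cm + 0.158 * fl_cm)
    return 10 ** log10_efw

# Example with plausible mid-third-trimester measurements (hypothetical values).
print(round(hadlock_efw_grams(hc_cm=28.0, ac_cm=26.0, fl_cm=5.5)))  # ~1470 g
```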
This review encompasses 27 research articles on the use of AI in the detection, measurement, and assessment of standard planes in obstetric US, with years of publication ranging from 2007 to 2023. Only three of the included studies investigated the use of AI algorithms in 3D US images [22,23,24], while 24 focused on 2D images. Most studies reported the combined analysis of various standard planes (13 studies, [15,17,23,24,25,26,27,28,29,30,31,32,33]) or exclusively presented an algorithm for the analysis of HC (9 studies, [18,22,34,35,36,37,38,39,40]), AC (4 studies, [21,41,42,43]) or femur length (1 study, [44]). Five studies used the freely available ‘HC18’ data set [45] for training and testing the algorithms, which contained 1334 2D images from 551 women of the standard fetal head plane [18,34,37,38,40].
AI algorithms for the automated detection of various standard planes in US video scans have been reported by Płotka et al. [25], Chen et al. [28], and Baumgartner et al. [15]. The unique study of Sendra-Balcells et al. presented a deep-learning model to identify standard planes in 2D images showing the transferability of the AI method to six low-income African countries [17]. In comparison with the analysis of 2D US videos, Sridar et al. [26], Burgos-Artizzu et al. [27], Rahman et al. [30], and Carneiro et al. [31,32] reported AI systems for the automated detection or measurement of various standard planes in 2D US images. Zhang et al. proposed an image quality assessment method for evaluating whether US images of standard planes fully show the anatomical structures with clear boundaries [29]. To improve clinical workflow efficiency, Luo et al. evaluated the intelligent Smart Fetus technique for its ‘one-touch’ approach to search and automatically measure the cine loop for standard planes once the sonographers press the freeze button [33]. Pluym et al. and Yang et al. analyzed 3D US volumes for the localization and measurement of various intracranial standard planes, such as the transventricular, transthalamic, or transcerebellar plane [23,24].
The automated detection and measurement of HC in 2D images was investigated by the research groups of Zeng et al. [18,34], Li et al. [36,40], Yang et al. [37], and Zhang et al. [38]. Likewise, Van de Heuvel et al. and Arroyo et al. presented systems for the automated analysis of HC, particularly through the use of a standardized sweep protocol for data collection to eliminate the need for sonography experts and enable applicability in underserved areas [35,39]. The only study to identify the HC and biparietal diameter from 3D US volumes, using the commercially available Smartplanes® software, was presented by Ambroise Grandjean et al. [22].
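Many HC pipelines of this kind reduce the measurement step to segmenting the skull and fitting an ellipse. The following is a minimal post-processing sketch, assuming a binary skull mask (from any segmentation model) and a known pixel spacing; it is not the specific method of any included study:

```python
import cv2
import numpy as np

def head_circumference_mm(mask: np.ndarray, mm_per_pixel: float) -> float:
    """Fit an ellipse to a binary skull segmentation mask and return the
    head circumference via Ramanujan's perimeter approximation."""
    contours, _ = cv2.findContours(mask.astype(np.uint8),
                                   cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
    contour = max(contours, key=cv2.contourArea)   # largest object = skull
    (_, _), (d1, d2), _ = cv2.fitEllipse(contour)  # full axis lengths in pixels
    a, b = d1 / 2.0, d2 / 2.0                      # semi-axes
    h = ((a - b) / (a + b)) ** 2
    perimeter_px = np.pi * (a + b) * (1 + 3 * h / (10 + np.sqrt(4 - 3 * h)))
    return perimeter_px * mm_per_pixel
```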
The acquisition and measurement of AC were considered in the studies of Jang et al. and Kim et al. analyzing 2D images [21,41], as well as by Ni et al. and Chen et al. analyzing 2D videos acquired by graduate students [42,43].
Remarkably, only one study, from Zhu et al., reported the automated assessment of femur length in 2D US images, addressing the difficulties of femur length acquisition caused by the complex background in femur US images [44].
In conclusion, the automated acquisition and measurement of standard planes is an increasingly investigated area, but still faces problems when it comes to clinical applicability and generalization. The benefits of AI usage in standard plane acquisition were found to be the possibility of real-time applicability [15,25,32,33]; the incorporation of clinical aspects into image interpretation [21,23,26]; the feasibility of biometric assessment by non-experts [35,39,43]; the application of a lightweight algorithm in point-of-care devices [18,34,37]; and, as a consequence of the latter two aspects, applicability for medically underserved areas [17,35].
The limitations of AI applications for fetal biometry were found to be reduced algorithm accuracy in poor quality images due to high maternal BMI [22,23], low contrast of anatomical structures [36], and higher GA with large fetuses [21]. Other reported limitations were slow processing times [35,43] and the lack of training for algorithms with pathological cases [28,35].

3.2.2. Fetal Echocardiography

For the detection of the most common congenital malformations, which are known to be congenital heart diseases (CHDs), with an incidence of 6–12/1000 livebirths [46], fetal sonographic examination is usually performed in the second trimester [47,48]. The prenatal diagnosis of CHD is of substantial significance, resulting in improved neonatal outcomes compared with postnatal diagnosis. It allows for appropriate counseling for parents, as well as delivery and treatment planning, and, in some cases, even in utero therapy [49].
Fetal echocardiography is a highly challenging technique, even for experts, and is primarily based on the acquisition of standard views, such as the four-chamber view (4CV), three-vessel view, three-vessel trachea view, and left and right ventricular outflow tract view. The combination of standard views allows for the detection of up to 90% of CHD; however, in clinical practice, the detection rate is only about 30% [48,50]. Reasons for low detection rates were described as insufficient sonographer interpretation and inadequate acquisition of standard views, which were often due to fetus-related factors such as fetal position, movements, and the small size of the fetal heart and its possible defects [50].
This review includes a total of 23 articles from 2007 to 2023 related to fetal echocardiography. Four of the included studies investigated the application of fetal intelligent navigation echocardiography (FINE) as a reliable technique that enabled the automated acquisition of nine standard echocardiographic views from specific 4D volumes of a single cardiac cycle in motion [51,52,53,54]. It enabled operator-independent examination and contributed to the standardization of fetal echocardiography [52]. While Yeo et al. reported the time-saving benefits of workflows using FINE in 51 fetuses with normal cardiac anatomy and 4 different CHD cases [51], Ma et al. confirmed the applicability of FINE to anatomically abnormal hearts through the successful generation of three standard views in 30 fetuses with double-outlet right ventricles [53]. In a case report, Veronese et al. reported the successful detection of four atrioventricular septal defects using the FINE system [54].
Five of the included studies investigated the performance of AI models that can detect structural abnormalities in cardiac anatomy. Dozen et al. presented a method specific to the interventricular septum [55]; Han et al. focused on the assessment of the left ventricle and left atrium [56]; and Xu et al. aimed at identifying seven anatomical structures, namely right and left atrium and ventricle, thorax, descending aorta, and epicardium [57]. The automated detection of standard views was investigated by Wu et al., Yang et al., and Nurmaini et al. [58,59,60], while CHD was effectively detected by AI models generated by the research groups of Gong et al. and Nurmaini et al. [61,62], and selectively for the diagnosis of total anomalous pulmonary venous connection by Wang et al. [63].
Three recent studies assessed fetal cardiac function. Yu et al. automatically measured left ventricular volume in 2D US images [64], Herling et al. analyzed the automated measurement of fetal atrioventricular plane displacement in US videos of cardiac cycles using color tissue Doppler [65], and Scharf et al. evaluated the automated assessment of the myocardial performance index as a tool to analyze fetal cardiac function [66]. Lastly, two included studies developed AI models for model improvement itself, by synthesizing high-quality 4CV images for model training [67] and by providing existing models with new input data to support the learning process [68].
The complexity of fetal echocardiography derives from the skill needed to detect even the smallest anatomical abnormalities in a beating organ, which makes it an interesting and challenging research area for AI applications. Advantages are the facilitation of standard view acquisition [58,60] and CHD detection [53,61,62], as well as a significant reduction in examination time [52,55]. Furthermore, Arnaout et al. outlined the benefits of their AI model for telehealth approaches and the diagnosis of rare diseases [50]. In the study of Emery et al., an AI-based navigation system for needle tracking in fetal aortic valvuloplasty promised increased safety, reduced intervention time, and transferability to other fetal interventions such as amniocentesis [69].
As AI models increasingly acquire US images, the need for quality control mechanisms has arisen to ensure image quality. This issue was addressed by Dong et al. and Pietrolucci et al., who developed quality assessment AI models, one of which is already commercially available as ‘Heartassist™’ [70,71]. Furthermore, to address the ‘black box problem’, which describes the opacity of algorithms whose internal decision making eludes human understanding, Sakai et al. proposed a method to support fetal echocardiography through ‘explainable AI’ [7]. This technique aims to promote the trustworthy use of AI methods by clinicians through the development of specific AI modules that explain the algorithm’s behavior.
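‘Explainable AI’ of this kind is commonly realized with saliency techniques such as Grad-CAM, which projects a classifier’s decision back onto the input image. The sketch below is a generic PyTorch illustration of that idea, not the specific module proposed by Sakai et al.:

```python
import torch
import torch.nn.functional as F

def grad_cam(model, image, target_layer, class_idx):
    """Generic Grad-CAM: highlight the image regions that drive the model's
    score for class_idx (illustrative sketch, not a published module)."""
    activations, gradients = [], []
    fwd = target_layer.register_forward_hook(
        lambda module, inp, out: activations.append(out))
    bwd = target_layer.register_full_backward_hook(
        lambda module, grad_in, grad_out: gradients.append(grad_out[0]))
    logits = model(image.unsqueeze(0))       # image: (C, H, W) tensor
    model.zero_grad()
    logits[0, class_idx].backward()          # gradient of the class score
    fwd.remove()
    bwd.remove()
    acts, grads = activations[0], gradients[0]        # (1, K, h, w)
    weights = grads.mean(dim=(2, 3), keepdim=True)    # channel importance
    cam = F.relu((weights * acts).sum(dim=1))         # (1, h, w)
    cam = cam / (cam.max() + 1e-8)                    # normalize to [0, 1]
    return cam.squeeze(0).detach()
```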
AI assistance in fetal echocardiography showed several limitations. First, most of the studies only used 4CV images as the input for their algorithms [55,57,61,62,63,67,70], although the detection rate of CHD could be increased by analyzing different standard views [55]. These studies predominantly used apical 4CV, which resulted in AI model limitations when analyzing 4CV from different scanning angles, such as the fetal dorso-anterior position. This issue of the need for the correct identification of the region of interest (ROI) for optimized AI model performance was addressed by the study of Xu et al. [57]. Second, analyzed images have often been obtained only from healthy fetuses with normal cardiac anatomy and AI models lacked training with pathologic findings [52,55,57,64]. Furthermore, even with the assistance of AI methods, experienced sonographers were required for rechecking and the interpretation of results [7,51]. Lastly, the recognition of small CHD or small anatomical structures such as the trachea was limited in some models [55,58].
In summary, fetal echocardiography extensively profits from AI assistance, but shows limitations that need to be addressed in further research. Beside the aforementioned need for the automated detection of ROI, which has been recently proposed in the literature [72,73], other fields of application are of emerging interest. Not only the detection of structural abnormalities in case of CHD, but also cardiac function analysis, is a future topic of AI applications in fetal echocardiography using tissue Doppler US, which can be relevant, e.g., for fetuses diagnosed with hypoplastic left heart syndrome [74,75,76].

3.2.3. Fetal Neurosonography

Fetal neurosonography focuses on the assessment of fetal brain development and the identification of abnormalities [77]. For sonographic assessment, standard head planes should be acquired following the international guideline of the International Society of Ultrasound in Obstetrics and Gynecology (ISUOG) [77], which enables the detection of key anatomical structures such as lateral ventricles, cavum septum pellucidum, cerebellum, and cisterna magna. Sonographers performing neurosonography require an accurate understanding of fetal neuroanatomy, the skill to interpret 2D planes in a complex 3D structure, and, consequently, substantial clinical experience and training [78].
On the topic of neurosonography, 19 studies were included in this review, ranging from 2017 to 2022 in years of publication. Almost half of the included studies investigated AI applications in US using 3D volumes. Three of the included studies [78,79,80] used data collected in the Fetal Growth Longitudinal Study of the INTERGROWTH-21st Project, which aimed at developing international standards in fetal growth and size [81].
The research topics in this field are heterogeneous, presenting a wide variety of applications for AI-assisted methods. The establishment of a plane localization system as a 3D reference space for locating 2D planes was proposed by Yeung et al., Namburete et al., Yu et al., and Di Vece et al. for improving the acquisition of standard planes and facilitating anatomical orientation for sonographers [78,80,82,83]. In particular, the method by Di Vece et al. used a 23-week synthetic fetal phantom for system development and was the only study to estimate the 6D poses of US planes, combining the common 3D coordinates with rotation around the brain center [82]. Xu et al. presented an AI method for authentically simulating third-trimester images from second-trimester images for deep-learning researchers with restricted access to third-trimester images [84]. The automated detection of brain structures and malformations was described by Lin et al. [85,86], Alansary et al. [87], and Gofer et al. [88] in 2D images and videos, and in 3D volumes by Hesse et al. and Huang et al. [79,89]. Image quality assessment of whether a standard plane was correctly acquired, either by human operators or by automated extraction from 3D images, was effectively performed by models developed by the research groups of Lin et al. [90], Yaqub et al. [91], and Skelton et al. [92]. The researchers Xie et al. [6,93] and Sahli et al. [94] reported methods for classifying US images into a binary system of ‘normal’ and ‘abnormal’ cases, in which Xie et al. additionally localized the structural lesions that led the algorithm to declare an image ‘abnormal’ and thus recommended that the clinician recheck the labeled area. Lastly, the studies of Burgos-Artizzu et al. and Sreelakshmy et al. portrayed AI methods for the estimation of GA through an analysis of transthalamic axial planes or cerebellum measurements [95,96].
The benefits of using AI algorithms in fetal neurosonography were, besides a reduced workload for sonographers due to faster acquisition and measurements [86], the development of guiding methods for skill training [78,83], the measurement of small anatomical structures such as the fetal cortex in the first trimester [88], and the accurate estimation of GA in a pregnancy without a valid first-trimester scan [95].
The primary limitation of AI US imaging in this area was described to be the rapid anatomical development of fetal brain structures, with brain maturation, increasing head size, and an increasing degree of ossification as GA rises [78,80,84]. Ossification of the fetal skull increases shadowing in US images and thus reduces image quality and visibility [80]. To address the heterogeneity of brain images across GAs, studies described the need to match the GA of US images in the algorithms [82,94]. Other study limitations were the lack of training of AI algorithms on images of pathologies [79,86,95] and the problem of miscalculations when US images were not in accordance with the guidelines for standard planes [6,80].

3.2.4. Fetal Face

With advances in obstetric US and the possibility of 3D and 4D US, the analysis of the fetal face has become feasible and of rising interest. This section encompasses five articles from 2018 to 2023, with heterogeneous research topics.
Fetal facial malformations, such as cleft lip and palate, can be assessed by acquiring standard planes such as the ocular axial, median sagittal, and nasolabial coronal plane [97]. Wang et al. and Yu et al. presented AI algorithms to automatically identify standard planes in 2D images [97,98]. However, as facial malformations can be a phenotype of an underlying genetic disorder, Tang et al. used 3D images of fetal faces to develop a novel approach for the early, non-invasive identification of genetic disorders by analyzing key facial regions, such as the jaw, frontal bone, and nasal bone [99].
Additionally, fetal movements and facial expressions were found to be correlated with fetal brain activity and developmental state [100]. Facial expressions such as eye blinking, mouthing, smiling, and yawning have been described as indicating fetal brain maturation, and in utero stress may result in scowling, while the meaning of tongue expulsion and a neutral expression remains unclear [101,102]. Miyagi et al. proposed an AI classifier analyzing 4D US volumes to assess fetal facial expressions and classify them into different categories [102,103], and showed that the identification of dense and sparse states of brain activity is possible [104].

3.2.5. Placenta and Umbilical Cord

The placenta is known to play an important role in the pathogenesis of obstetric complications such as placenta previa, abnormally invasive placenta, fetal growth restriction, and hypertensive disorders of pregnancy [105]. Little evidence exists on the neglected role of placental characteristics in the prediction of these complications [106]. For example, research has shown a correlation between early placental sonographic echogenicity and the prediction of intrauterine growth restriction [107]. To date, sonographic placental assessment is mainly restricted to the identification of placental location, adhesion, or the insertion site of the umbilical cord [108]; further assessment is limited by the impossibility of detecting minimal textural changes on routine scans and by time-consuming examinations. The use of AI imaging algorithms has recently enabled the automated assessment of placental volume, tissue texture, and vascularization, and is thus of rising research interest.
In this review, 20 articles were included that were published from 1994 to 2023, whereby the three studies from 1994–1996 investigated umbilical cord Doppler analysis and studies from 2014–2023 focused on placental analysis. While part of the included studies focused on the automated assessment of placental localization and volume, others reported efforts to identify or predict the presence of placenta-related obstetric complications by analyzing echogenic tissue texture.
Andreasen et al. and Schilpzand et al. presented effective AI algorithms for placental localization, including heterogeneous data reflecting differences in sonographers’ expertise [109], or using a previously established sweep protocol in low-resource settings [110]. It is known that early reduced placental volume is associated with small-for-gestational-age fetuses [111]. Schwartz et al. and Looney et al. presented effective models to automatically assess placental volumes from 2D and 3D images in the first trimester [112,113]. Hu et al. performed an echotexture analysis in 2D placental images [108], while Qi et al. reported successful automated localization of placental lacunae in 2D images as a potential tool for screening for abnormally invasive placenta [114,115]. The automated classification of placental maturity was proposed by Lei et al. and Li et al. [116,117]. Early and small changes in placental tissue texture were detected by Gupta et al. and Sun et al. using AI-assisted US and microvascular Doppler imaging in women with hypertensive disorders of pregnancy [118,119] and gestational diabetes [120].
Further examples of adverse pregnancy events with an often disastrous maternal and fetal outcome are placental abruption and pernicious placenta previa. Yang et al., therefore, investigated the predictive role of a scoring system for the occurrence of pernicious placenta previa [121], while the research group of Asadpour et al. reported a method for identifying placental abruption [122].
In addition to placental characteristics, research exists on the role of umbilical cord anatomy and blood flow. Pradipta et al. investigated the use of machine learning methods to classify 2D color Doppler US images of umbilical cords along the umbilical coiling index and its possible impact on fetal growth [123]. In the earliest studies included in this review, Beksaç et al. and Baykal et al. presented an automated diagnostic, interpretation, and classification method to analyze umbilical artery blood flow Doppler US images [124,125,126].
The importance of pre-operative planning for surgical interventions such as laser-therapy in twin-to-twin-transfusion syndrome was addressed by the research group of Torrents-Barrena et al. [127,128]. They proposed a new AI algorithm for the simulation and planning of fetoscopic surgery through the detection and mapping of the maternal soft tissue, uterus, placenta, and umbilical cord via MRI in combination with the detection of the placenta and its vascular tree in 3D US. This model fully simulates the intraabdominal environment and enables the correct entry point planning and surgeon’s training [127,128].
In summary, placental AI-based US diagnostics may offer a promising non-invasive, predictive tool to improve patient counseling and management and to prevent adverse pregnancy outcomes. Reported limitations in application arose from the difficulty of identifying the interface between the placenta and myometrium, especially in first-trimester scans [113], and low accuracy rates in the assessment of posterior wall placentas [109]. Further research is necessary to identify the link between placental health and obstetric complications.

3.2.6. Fetal Malformations

First Trimester Scan

The timing of the first trimester scan is standardized to 11 + 0 to 13 + 6 weeks of gestation, and its image acquisition is defined by a protocol of the Fetal Medicine Foundation [129] and the ISUOG [130]. The purpose of this US examination includes confirmation of viability; assessment of GA; screening for preeclampsia; and the detection of chromosomal anomalies such as trisomy 13, 18, or 21 or other malformations. Combining clinical information (maternal age and serum parameters) with the sonographic assessment of fetal characteristics, predominantly the assessment of nuchal translucency (NT), is recommended practice [131].
Seven studies included in this review focused on AI applications for first trimester scans, ranging from 2012 to 2022. The research groups of Walker et al. [132], Zhang et al. [133], Sciortino et al. [134], and Deng et al. [135] addressed the time-consuming process of NT measurement by introducing AI models for its automated detection and measurement, in particular for the diagnosis of trisomy 21 [133] or cystic hygroma [132]. Tsai et al. aimed at facilitating the preliminary step for NT measurement, namely the automated detection of the correct mid-sagittal plane in 3D volumes [136], and Ryou et al. and Yang et al. proposed models for the assessment of the whole fetus in 3D volumes [137,138]. The potential benefits of these models were a highly accurate non-invasive method for anomaly screening [134] and reduced workload [133,135,136,137]. Limitations were uncovered when assessing the fetal limbs due to their small anatomy and close surroundings [136,137,138], small data sets for rare anomalies [132,133], and the missing real-time application required for clinical applicability [133,134,138].

Second Trimester Scan

The timing of the second trimester scan is standardized to 18 + 0 to 23 + 6 weeks of gestation and is intended for the evaluation of fetal growth and detection of fetal malformations [139].
Four studies included in this review focused on the detection of fetal malformations in mid-trimester US scans. Matthew et al. prospectively evaluated a model for automated image acquisition, measurements, and report production [140]. Cengizler et al. proposed an algorithm for the identification of the fetal spine and proved the model’s performance in cases of fetuses with spina bifida [141]. Furthermore, Meenakshi et al. focused on the identification of the fetal kidneys [142], and Shozu et al. presented an AI model for the identification of the thoracic wall, which enabled plane detection for the 4CV but also allowed for the detection of thoracic malformations [143]. All of the studies showed a reduction in examination time, which helps sonographers concentrate on interpretation instead of repetitive tasks [140].

3.2.7. Prediction of Gestational Age

The estimation of GA is one of the most important indications for obstetric US in early pregnancy, helping to adjust maternity care and identify complications such as prematurity or fetal growth disorders [144]. It is usually calculated from the last menstrual period and is confirmed with fetal CRL and biometry. In low-resource countries, access to medical care is constrained, and US equipment and trained operators are scarce. Especially in these areas, pregnancy complications play an important role, and improved diagnostic resources for correct GA measurement, as a prerequisite for adequate maternity care, are thus necessary [145].
This literature review includes 10 studies on the assessment of GA, starting in 1996 with a pioneering study of Beksaç et al. on the estimation of GA via the calculation of the fetal biparietal diameter and HC [146]. In addition, the studies of Namburete et al. and Alzubaidi et al. similarly used the anatomy and growth of the fetal head for GA estimation [147,148]. Dan et al. developed a DeepGA model that used the three main factors of fetal head, abdomen, and femur [149], while Lee et al. proposed a machine learning method to accurately estimate GA from standard US planes [150]. The recent topic of point-of-care US was addressed in the research of Maraci et al., who successfully demonstrated automated head plane detection and GA estimation with point-of-care devices [151]. Lastly, four studies used data from the Fetal Age Machine Learning Initiative (FAMLI), an obstetric US development project in low-income settings. The purpose of these studies, which were based on US data from the United States and Zambia, was the successful establishment of an AI algorithm for GA estimation from simplified blind US sweeps performed by US novices in low-resource countries [144,145,152,153]. The benefits of the GA AI models were the possibility of application in low-resource countries [144,145,148,149,152], even without internet connectivity [145], and in portable devices [148], promising high accuracy with an error of 3.9 to 5 days in GA estimation [149,152]. An important limitation of the AI models was their application in very early [144,147] or very late stages of pregnancy [146,147,152], the latter due to the thickened texture of the fetal skull.
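The reported accuracy of 3.9 to 5 days corresponds to the mean absolute error between predicted and reference GA; the computation itself is straightforward, as the following sketch with hypothetical values shows:

```python
import numpy as np

# Hypothetical reference vs. model-predicted gestational ages in days.
ga_reference = np.array([140.0, 161.0, 182.0, 203.0, 224.0])
ga_predicted = np.array([143.5, 157.0, 186.0, 199.5, 228.5])

mae_days = np.mean(np.abs(ga_predicted - ga_reference))
print(f"MAE: {mae_days:.1f} days")  # 3.9 in this toy example
```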

3.2.8. Workflow Analysis of Obstetric Ultrasound Scans

Over the past decades, obstetric US has seen immense advances in US technology and computational power, including AI processes, but the procedure of acquiring the image itself by a bedside clinician has remained unchanged. As acquiring obstetric US skills is known to be a long-lasting and highly demanding task, efforts have been made to analyze the workflow of experienced sonographers and draw conclusions about the interaction between the sonographer, probe, and image [154].
All eight included studies investigating this topic arose from the same working group at the University of Oxford, UK. The PULSE (Perception Ultrasound by Learning Sonographer Experience) project, presented by Drukker et al., was designed to provide insights into experts’ sonography workflow and to transform the learning process of obstetric US using deep-learning algorithms [154]. Its data set formed the basis of all of the included studies. While Drukker et al. analyzed eye and transducer movements, actions during scanning, and audio recordings to generate automated image captioning of the sonographer’s explanations, Sharma et al. added pupillometric data to objectify not only the localization of the sonographer’s gaze on the screen, but also the intensity of concentration in this focus [155]. Completeness, precision, and speed of sonographic performance were assessed by Wang et al. to quantify operator skill level [156]. Zhao et al. presented a method for virtually assisted probe movement guidance along a virtual 3D fetus model with automatic labeling of the captured images [157]. Sharma et al. and Drukker et al. analyzed full-length scanning videos to assess workflow by building timeline models illustrating the scanning sequence of anatomical regions over time [155,158]. Lastly, Alsharid et al. developed a novel video captioning model for the description of second-trimester US scans by training an AI model with speech recordings and gaze-tracking information from sonographers performing US scans [159].
To sum up, the included studies on this topic showed clinical benefits of AI in scan workflow through automating image labeling [154,157], enabling transfer learning for US novices [160], reducing clinician’s mental workload, and optimizing workflow [155]. Limitations in application were reported to be the impossibility of the generalization of workflow sequences due to maternal−fetal factors and different skill levels of sonographers for different anatomical regions during a full routine scan [155,158].

3.2.9. Other Applications in Obstetrics

Fetal Lung Maturation

The maturation process of the fetal lung is an important aspect of clinical practice, as lung immaturity is a leading cause of neonatal morbidity and mortality [161]. The developmental stage of the fetal lung does not always correlate with the actual GA and can be influenced by pregnancy complications disrupting lung maturation.
Five included studies analyzed fetal lung US images for the prediction of neonatal outcomes. Du et al. proposed an AI model to classify the lung textures in pregnancies affected by gestational diabetes or preeclampsia [162], while Xia et al. and Chen et al. developed a lung maturation grading model that can be implemented for identifying abnormal development and evaluating the effectiveness of antenatal corticosteroid therapy [161,163]. The study of Bonet-Carne et al. and a further study of Du et al. showed that automated fetal lung ultrasound was able to accurately predict neonatal respiratory morbidity [164,165]. While Du et al. analyzed lung images from healthy and affected pregnant women, the studies of Xia et al. and Chen et al. were limited by the analysis of only healthy fetuses [161,163].

Maternal Factors

US in OB/GYN usually focuses on the examination of the fetus; however, there are several indications to evaluate maternal structures.
Four included studies are summarized in this section. The early study of Wu et al. proposed a tool for preterm labor prediction using computer-assisted measurement of the cervix on transvaginal US images to overcome the poor reproducibility and sonographer dependency of manual cervical length measurements [166]. The model of He et al. addressed the challenge of identifying and classifying retained intrauterine pregnancy tissue, with the potential to reduce associated complications and improve the surgical outcomes of curettage [167]. Wang et al. presented an algorithm to assess color Doppler US images of fetal and maternal vessels as an approach to facilitate the diagnosis of severe preeclampsia linked to the medical outcome [168]. Lastly, Liu et al. proposed a Doppler US model for the prediction of fetal distress in women with pregnancy-induced hypertension [169]. These algorithms may improve medical diagnosis and potentially reduce clinical workload by replacing the sonographer’s manual tracing [168].

Early Pregnancy

Three studies included in this review focused on the US examination in early pregnancy, which is often performed to confirm intrauterine localization and vitality of pregnancy, to estimate GA via measuring CRL, or to diagnose adverse pregnancy outcomes such as miscarriages.
While Wang et al. prospectively analyzed an automated assessment of the gestational sac as a predictor of early miscarriage in 2D images at 6–8 weeks of gestation [170], the research groups of Sur et al. and Looney et al. proposed 3D AI models to provide volumetric measurements of the embryo, placenta, gestational sac, yolk sac, and amniotic fluid [113,171]. The potential benefits of these models were the development of fetal volume nomograms for precise fetal growth assessment [171] and the establishment of an early screening method for the prediction of adverse pregnancy outcomes [113], which facilitates adequate counseling and recommendations for follow-up US examinations [170]. However, validation of the results is difficult in this field of application, which limits clinical applicability.

Intrapartum Sonography

The application of US during the second stage of labor to assess the progression of childbirth is a recent development to improve obstetric management. Transperineal US is used to objectify the digital vaginal examination when estimating fetal head descent.
In a prospective, multicenter study, Ghi et al. established an AI model for automatically classifying the fetal head position and distinguishing between occiput anterior and non-occiput anterior positions, because the latter may result in protracted labor and an increased risk of a poor obstetric outcome [172]. In addition, Lu et al. and Bai et al. proposed methods for the automated measurement of the angle of progression, which allows for the estimation of fetal head descent by identifying the symphysis and the fetal head contour [173,174]. All three studies showed promising results but lacked clinical applicability owing to their missing real-time application.
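Once the landmarks are identified, the angle of progression itself is a purely geometric quantity: the angle at the inferior symphyseal edge between the symphysis long axis and the line running tangentially to the fetal skull contour. Assuming an upstream model supplies these landmark points (a hypothetical interface, not that of the cited studies), the measurement reduces to a vector-angle computation:

```python
import numpy as np

def angle_of_progression(symphysis_top, symphysis_inferior, skull_tangent_point):
    """Angle (degrees) at the inferior symphyseal edge between the symphysis
    long axis and the tangent line to the fetal skull contour. Arguments are
    (x, y) pixel coordinates, assumed to come from an upstream landmark model."""
    p = np.asarray(symphysis_inferior, dtype=float)
    v1 = np.asarray(symphysis_top, dtype=float) - p        # along the symphysis
    v2 = np.asarray(skull_tangent_point, dtype=float) - p  # toward the skull
    cos_angle = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return float(np.degrees(np.arccos(np.clip(cos_angle, -1.0, 1.0))))

# Toy coordinates for illustration only (yields roughly 117 degrees).
print(round(angle_of_progression((100, 50), (100, 150), (180, 190)), 1))
```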

Image Quality

In terms of the quality control of US images, four studies were included in this review that applied their algorithms to different aspects.
Wu et al. proposed a computerized quality assessment scheme for the quality control of US images by identifying the ROI in fetal abdominal images [175]. Meng et al. established a model for the classification of shadow-rich and shadow-free regions in various US images [176], and Gupta et al. presented an algorithm for a better separation between the fetus and surrounding information in fetal US, such as maternal tissue, placenta, or amniotic fluid [177]. Lastly, Yin et al. showed improved image quality when using an AI algorithm for image processing in US images of the pelvic floor [178]. Besides improved US image quality [176,177,178], the benefits of automated quality control algorithms are the facilitation of image acquisition by novices and experts, reduced workload, and the development of toolkits for education [175].

Miscellaneous

Five studies investigating various areas are summarized in this section.
The study of Cho et al. proposed a model for the automated estimation of amniotic fluid, which is known to be a particularly observer-dependent measure and therefore benefits from automation [179]. Compagnone et al. presented a clinical case report of the successful AI-image-guided placement of an epidural catheter in an extremely obese patient for delivery [180]. The research group of Maraci et al. developed an AI model to detect the fetal position and heartbeat from predefined US sweeps. Furthermore, Rueda et al. aimed at investigating fetal nutritional status using an AI-assisted assessment of the adipose and fat-free tissue of the fetal arm in US images [181]. Lastly, an AI model for the automated classification of fetal sex in 2D US images of the genital area was established by Kaplan et al., which helped reduce misclassification and facilitate screening [182].

3.3. Applications in Gynecology

Focusing on the specialty of gynecology, 41 research articles were included in this review. Figure 3 provides an overview of research topics on AI applications in gynecological US.

3.3.1. Adnexal Masses

Adnexal masses are among the most common reasons for US examination in gynecology due to the importance of ovarian cancer detection. The assessment of adnexal findings is crucial for further diagnostic steps and therapy planning, which differ significantly between benign and malignant tumors. In the clinical setting, the examination of adnexal masses is primarily performed via transvaginal US, combining grayscale 2D images with color Doppler imaging to assess vascularization. The identification and especially the classification of adnexal findings represent a challenging task even for experienced examiners, which is why the International Ovarian Tumor Analysis (IOTA) group has established US-based rules for the classification of adnexal tumors [183]. In recent years, the automated analysis of US images of adnexal masses has gained attention for its potential to support inexperienced examiners and to assist experienced examiners in diagnostic decision making.
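The published IOTA ‘Simple Rules’ give a flavor of the decision logic that such AI models aim to reproduce or exceed: a mass is classified as malignant if at least one malignant (M) feature and no benign (B) feature is present, as benign in the converse case, and as inconclusive otherwise. A minimal sketch of this decision logic, with the feature counts assumed to come from the examiner or an upstream model:

```python
def iota_simple_rules(n_benign_features: int, n_malignant_features: int) -> str:
    """Classify an adnexal mass with the IOTA Simple Rules decision logic.
    Counts of B- and M-features are assumed to be provided by the examiner
    or by an upstream image-analysis model."""
    if n_malignant_features >= 1 and n_benign_features == 0:
        return "malignant"
    if n_benign_features >= 1 and n_malignant_features == 0:
        return "benign"
    return "inconclusive"  # both or neither feature type present

print(iota_simple_rules(n_benign_features=0, n_malignant_features=2))  # malignant
```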
Of the 11 extracted articles, only two were designed as prospective studies [184,185]. The included studies were published from 2009 to 2023; the research group of Amor et al. was the first to describe an AI application in sonographic assessment, using a non-specified pattern recognition analysis to classify adnexal masses in a new reporting system [184]. All but one of the studies analyzed 2D images, with only three of them including color Doppler images.
Enabling an automated discrimination between benign and malignant tumors was a predominant focus of the current research, represented in six studies included [185,186,187,188,189,190]. Three studies assessed the performance of automated tumor classification [184,191,192], one study developed a population-based screening method for BRCA mutations [193], and one study focused on the automated elimination of artefacts and objects in US images to increase the accuracy of the AI model [194]. Aramendía-Vidaurreta et al. was the only group to investigate the automated discrimination of benign and malignant masses in 3D US images [187] and Hsu et al. distinguished between transabdominal and transvaginal US images [185].
All of the included studies showed high accuracy and sensitivity of AI performance. The study by Gao et al. used a large, multicenter, heterogeneous data set and showed that AI-enabled US outperformed the average trained radiologist in discriminating malignant from benign ovarian masses and improved the examiner’s accuracy [189]. These findings were consistent with other studies [185,186], but there were also studies with smaller sample sizes that showed a level of performance reaching that of human experts [188,191,192].
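Performance claims of this kind rest on standard diagnostic metrics. The following sketch shows how sensitivity, specificity, and the area under the ROC curve are typically derived from a model’s outputs; the labels, scores, and threshold are hypothetical:

```python
import numpy as np
from sklearn.metrics import confusion_matrix, roc_auc_score

# Hypothetical ground truth (1 = malignant) and model probability scores.
y_true = np.array([0, 0, 1, 1, 0, 1, 0, 1])
y_score = np.array([0.1, 0.4, 0.8, 0.65, 0.2, 0.9, 0.55, 0.3])
y_pred = (y_score >= 0.5).astype(int)  # threshold chosen for illustration

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
print("sensitivity:", tp / (tp + fn))   # 0.75 in this toy example
print("specificity:", tn / (tn + fp))   # 0.75
print("AUC:", roc_auc_score(y_true, y_score))  # 0.875
```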
Nevertheless, a described limiting aspect was the fact that clinicians using AI image-analysis algorithms must still take clinical aspects into account [188,189]. Furthermore, metastases or secondary ovarian cancer in pelvic images may be misinterpreted because of their different clinical presentation and their low representation in the data sets [189]. Frequently described limitations of studies on AI applications in US imaging were homogeneity of data due to a single examiner or a single-center design [188,192], a single investigated ethnicity [189], absent external validation [192,193], poor image quality [186,190], and, most importantly, sample sizes too small to sufficiently train the algorithms [184,185,187,188,192,193].

3.3.2. Breast Masses

Breast cancer represents the most common malignancy in women worldwide, and its incidence still shows a rising tendency [195]. To address this health issue, screening programs and early diagnosis are of the utmost importance. While primary screening is often performed and recommended through mammography, the advantages of breast US are numerous. Especially in women with dense breast tissue, e.g., predominantly young women or women of Asian ethnicity, and in underserved areas, US diagnostics and screening are crucial [196].
This review includes eight articles on AI application in US imaging of the breast, all of which were published in the past three years and focused on 2D images.
All of the included studies worked on either the detection of breast lesions, classification, or both. Two studies used AI algorithms in combination with handheld US devices [197,198]. Berg et al. pointed out the importance of training for sonographers to obtain a reasonable image quality for AI analysis [197], while Huang et al. compared handheld US to robotically performed AI-assisted US and showed reduced costs, shorter examination times, and a higher detection rate in the latter [198]. The possibility of avoiding unnecessary breast biopsies was the result of two further studies, of which one used an AI-assisted multi-modal shear wave elastography model [199,200]. In a retrospective study, Dong et al. promoted the importance of an increased confidence in AI assistance in health care, which can be addressed by understanding the algorithm of the black box and encouraging the concept of ‘explainable AI’ [201]. Limitations to AI usage in breast US were the missing clinical context in unimodal approaches only focusing on image analysis [202], small data sets for algorithm training, and a lower accuracy in borderline findings [201].

3.3.3. Endometrium

In gynecologic US examinations, evaluation of the endometrium is part of the normal routine and derives its significance from the frequency of endometrial abnormalities, e.g., fibroids, polyps, endometrial hyperplasia or atrophy, and carcinoma [203]. In particular, endometrial thickness is known to be dynamic in premenopausal women throughout the menstrual cycle, while an increase in thickness in postmenopausal women represents a risk factor for the presence of malignancy [204]. However, the identification of the endometrial–myometrial junction represents a challenging task due to heterogeneous textures, irregular boundaries, and the different sizes of the endometrium across the menstrual phases, which is why the application of AI in US is a field of research interest.
For this topic, five articles were extracted from the current literature. Publication years ranged from 2019 to 2023. Wang et al. and Zhao et al. conducted their studies based on 3D US images [205,206]. All but one study investigated the AI performance for the assessment of endometrial thickness, texture, or uterine adhesions [205,206,207,208]. Moro et al. aimed at establishing an AI model for risk stratification in endometrial cancer, but could not prove increased performance [209].
The application of AI US to assess endometrial characteristics showed high accuracy and a level of performance similar to that of human examiners [205,208], which could be further increased by setting human-selected key points in the images as a demarcation of the ROI for the AI algorithm [207]. The only two studies using 3D imaging outlined the superiority of these data over 2D imaging for the improved identification of the endometrial–myometrial junction [205,206]. Extracted limiting aspects of AI application were reduced accuracy in assessing endometria thinner than 3 mm [208], operator dependence, limited data for algorithm training [205,206], and the need for human experts to select images before analysis [205,207,209].

3.3.4. Pelvic Floor

The assessment of pelvic floor dysfunction is an essential and sensitive part of the gynecological examination due to its consequences for women's health-related quality of life. Transvaginal US is the preferred diagnostic method, enabling the assessment of pelvic organ integrity, the dynamics of pelvic floor function during the Valsalva maneuver, and the diagnosis of pelvic organ prolapse.
This review includes six articles on this topic, only one of which was designed as a prospective, randomized-controlled clinical trial [210]. Publication years ranged from 2019 to 2023. Two studies used 3D US images [211,212], three used 2D images [213,214,215], of which two derived the 2D images from a 3D/4D data set [214,215], and one did not specify the type of image [210].
The assessment of the pelvic floor muscles and the measurement of pelvic anatomical landmarks were addressed in all studies, while two focused on the diagnosis of pelvic organ prolapse [212,213]. All studies obtained reliable automated plane detection and measurements. Three studies demonstrated a significant reduction in evaluation time from manual to automatic image analysis, from up to 15 min down to 1.27 s [211,212,213], freeing clinicians' time for better bedside patient care. Limiting aspects encompassed high operator dependency [211,212], homogeneity of data when exclusively using cases of affected women [212,213], and the need for manual ROI selection before AI image processing [211,213,214].

3.3.5. Other Applications in Gynecology

Further fields of application were identified in the process of this literature review. In total, 11 articles are summarized in this section, covering the topics of endometriosis [216,217], premature ovarian failure [218,219], uterine fibroids [220,221], follicle tracking [222,223], and ectopic pregnancies [224,225]. Another study addressed the issue of poor image quality in 3D US images caused by data processing and showed that AI image enhancement methods can increase 3D image quality with user-preferential flexibility in both gynecological and obstetric US images [226].

Endometriosis

Two included articles discussed the sensitive topic of endometriosis, which can be challenging for both physician and patient due to its complex clinical management and the impaired quality of life of affected women [216,217]. Both studies used transvaginal 2D US videos and lacked histopathological or surgical confirmation, but focused on two different manifestations of endometriosis. Maicas et al. developed a highly accurate AI model for the classification of pouch of Douglas obliteration, a consequence of the pelvic inflammation often seen in endometriosis, via detection of the so-called 'sliding sign' [216]. In comparison, the results of Raimondo et al. showed a low sensitivity but high specificity of their AI model for detecting adenomyosis, which was interpreted as making it more useful for excluding than for detecting the condition [217].

Uterine Fibroids

Two of the included studies analyzed the automated detection of uterine fibroids. The retrospective study of Huo et al. showed that AI-assisted US improved the accuracy of uterine fibroid assessment by young sonographers, but concluded that AI applications assist rather than replace human observers [220]. Yang et al. proposed an AI algorithm for the detection of fibroids that facilitated pre-operative guidance and interventional therapy [221].

Premature Ovarian Failure

Premature ovarian failure, or premature ovarian insufficiency, is defined as the interruption of ovarian function before the typical onset of menopause, affects around 1% of women by the age of 40, and can cause amenorrhea or infertility [227]. Besides medical history and hormone laboratory results, transvaginal US is the primary diagnostic tool to assess ovarian characteristics. This review lists two studies on this topic, evidencing that ovarian artery flow parameters obtained by AI-analyzed color Doppler imaging can be used as a predictive factor; both AI models showed reliable disease prediction [218,219].

Follicle Tracking

In reproductive medicine, the evaluation of follicles after ovarian stimulation and of the functional ovarian reserve in patients suffering from infertility is an important diagnostic component performed via US. Two included studies, one of which had a prospective, randomized-controlled design, showed increased accuracy of follicle evaluation and reduced examination time with AI-assisted 2D and 3D US [222,223]. Mentioned limitations included the higher cost of AI-assisted machines and potentially reduced image quality in obese patients [223].

Ectopic Pregnancy

In contrast to using AI for US image analysis, two studies published an approach that uses US images of ectopic pregnancies to build an ontology with a reference image collection for specific diagnostic signs (e.g., the 'ring of fire'). The prognosis of ectopic pregnancy is known to depend on the correctness and timing of the diagnosis, and the research groups of Maurice et al. and Dhombres et al. showed that a knowledge base for US image annotations, used as a clinical decision support system based on this ontology, significantly improved the timeliness of diagnosis [224,225].
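To give a feel for how such an ontology with a reference image collection can support annotation, the toy Python sketch below maps diagnostic US signs to candidate diagnoses; the structure and entries are invented for illustration and do not reproduce the ontology of the cited studies [224,225].

# Toy sign-to-diagnosis knowledge base (illustrative entries only).
SIGNS = {
    "ring_of_fire": {
        "description": "Peritrophoblastic flow on color Doppler",
        "suggests": ["ectopic pregnancy", "corpus luteum"],
    },
    "empty_uterus": {
        "description": "No intrauterine gestational sac visualized",
        "suggests": ["ectopic pregnancy", "early intrauterine pregnancy"],
    },
}

def annotate(observed_signs):
    """Aggregate candidate diagnoses for the signs annotated on a US image."""
    candidates = set()
    for sign in observed_signs:
        candidates.update(SIGNS.get(sign, {}).get("suggests", []))
    return sorted(candidates)

print(annotate(["ring_of_fire", "empty_uterus"]))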

4. Discussion

This systematic literature review presents an overview of applications for AI in US imaging in the medical field of OB/GYN. Considerably more of the publications suitable for inclusion focused on applications in the field of obstetrics (148 versus 41 studies), possibly due to the predominance of US indications in this field. US is the preferred imaging method during pregnancy for fetal and maternal disorders because it requires no ionizing radiation and allows real-time examination. In contrast, gynecological disorders such as various cancer entities and pelvis-related diseases benefit from other imaging methods such as MRI or CT. In the current literature, not only US but also MRI applications profit from AI assistance, for example in fetal lung texture analysis [228,229] or cervical cancer diagnosis [230]. In the following, the benefits and limitations of AI application in OB/GYN US imaging are summarized.

4.1. Benefits

In general, AI in US imaging has the potential to reduce inter- and intra-observer variability by automating image acquisition and interpretation [8]. AI-assisted US can significantly shorten examinations, with image acquisition times decreasing from minutes to seconds [211,212], thus minimizing clinicians' workload [33] and enabling the sonographer to focus on interpreting the obtained images [140]. These advantages are of the utmost importance in the clinical setting, especially in times of shortage of experienced health care personnel. In addition, AI models have been designed not only for image acquisition and classification, but also for facilitating or omitting repetitive, work-intensive tasks such as scan report production or the captioning of US videos [140,159].
Not only clinicians but also patients profit from AI usage in US, as AI helps to improve diagnostic accuracy and provides diagnostic safety. For example, the use of AI-assisted US has been shown to reduce unnecessary hospital admissions due to misdiagnosis as well as unnecessary breast biopsies [197,199,200]. This may not only reduce health care costs but, more importantly, diminish the psychological burden for patients with uncertain diagnoses who fear the need for further diagnostics and interventions after inconclusive imaging results. In this context, AI in clinical settings can positively impact an individual patient's life. Another example of direct patient benefit is the finding that in high-risk patients with ectopic pregnancies, a shorter time to diagnosis may result in improved outcomes [224]. AI models can also help to increase diagnostic accuracy, for example when US image quality is impaired by a thickened abdominal wall in obese patients [180]. Moreover, advantages in pre-operative risk stratification and intraoperative assistance are described in both subspecialties of OB/GYN, e.g., in pre-operative endometrial cancer staging [209] and in fetoscopic surgical interventions [69,127]. Because of the reduction in examination time, AI-assisted US also has the potential to enable cost-effective, population-based screening, e.g., for breast US [193,198,199]. Remarkably, when contextual clinical information is additionally incorporated into the AI model, the rate of misclassification and misdiagnosis has been shown to decrease [23,26]. To sum up, AI offers not only technical advantages in imaging quality and accuracy but, even more importantly, a clear benefit for the individual patient's health care.
Moreover, AI-assisted US helps to improve clinical education, which is known to suffer from the shortage of experienced clinicians and increased workloads, especially during the recent pandemic. It can support US novices in skill training and enables non-experts to acquire US images [160], e.g., for telehealth approaches in times of shortage of expert sonographers [50]. It is therefore of public health relevance, reducing costs and the need for sonography experts [35,39]. In this framework, AI models can additionally enhance image acquisition and diagnostic accuracy in point-of-care US devices, which is of particular significance for low-resource settings and medically underserved areas [34,145,148,199].

4.2. Limitations

The main limitation of AI models in US imaging described in the summarized literature is that most models still require experts for image acquisition and for image or ROI selection to obtain adequate image quality for accurate model performance [6,205,209,216], as well as for the interpretation of results [51,220]. In tissue analysis applications, such as assessment of the endometrium [207,209] in gynecology and of fetal lung texture [162] or identification of the cervix [166] in obstetrics, manual ROI selection remains a limiting aspect of AI performance. These findings are in accordance with the frequently noted statement that AI models are primarily intended to assist clinicians, not to replace them [6,231]. This limitation deserves discussion, as it underscores the continued need for humans to perform, analyze, supervise, and interpret AI-produced results and to draw the clinical consequences from them.
The irreplaceable need for experts becomes understandable when considering further limitations of AI usage. In pattern recognition tasks, some AI models can fail when subtle differences are relevant to the diagnosis, e.g., in borderline findings or in small regions of interest such as a thin endometrium or fetal brain structures [95,201,208]. A change in US probe or modality may also lead to misclassification, e.g., between abdominal and vaginal US images [185]. Furthermore, AI model performance in 2D and 3D US imaging can be limited by imaging artifacts and noise, especially when automated tissue analysis is intended, for example for fetal lung assessment, for which MRI appears to be a promising, future-oriented modality [127,163]. In the clinical setting, the assessment of fetal lungs in terms of texture and volume is relevant for prenatal diagnosis, risk classification, prediction of prognosis, and therapy planning in fetal congenital diaphragmatic hernia, profiting from the combination of the imaging modalities of US and MRI [228,232].
In obstetric US, AI models designed for automatic biometric measurements are usually restricted to a specific range of GA and can fail on images outside this range [34,40,78]. Small structures such as the fetal limbs are prone to failed AI recognition [26,138,140], as is the differentiation of structures within similar tissue textures [113]. Furthermore, real-time capability is of particular importance in obstetric US, and some authors noted the lack of real-time applicability of various AI models, interestingly affecting especially those designed for intrapartum use [42,127,172,173,174]. This limitation may be due to the high computational power and memory demands of AI algorithms. One leading limitation of AI algorithms in obstetric US is the dependence on fetal position and movement. In fetal echocardiography in particular, most presented AI models have been trained on the apical 4CV, ignoring the reality of heterogeneous US images obtained from different scanning angles in clinical routine [57]. As a solution to this issue and an emerging research focus, AI models can first detect the fetal heart as an ROI in US images [72,73].
However, not only the AI models themselves, but also the study designs for model development and analysis summarized in this review bear limitations that influence model development and performance. As AI algorithms usually depend on large data sets for training, the detection of rare pathologies is limited by insufficient training data for pattern recognition models [132,133]. As most studies were performed with data obtained from healthy subjects or healthy fetuses, miscalculation or misdiagnosis may occur in cases of pathology [35,95,145]. Other design-related factors that influence model performance are single-center settings, a single observer or sonographer, a single US device, small sample sizes, missing long-term data, and missing clinical validation.
Nevertheless, as outlined in the state-of-the-art review by Drukker et al., the clinical applicability of AI algorithms is still limited by clinicians' fears and concerns regarding the safety and stability of the algorithms, trustworthiness, ethical background, privacy, and professional liability [3]. Research on AI must also address the ethical aspects of its application. The World Health Organization guidance on the ethics and governance of artificial intelligence for health 'recognizes that AI holds great promise for the practice of public health and medicine' [233], but also stresses the ethical challenges that must be addressed in view of these fast-developing technologies. Drukker et al. stressed the importance of better interdisciplinary research between AI technicians and clinicians to reduce the difficulties and insecurities clinicians face with the complex methods of AI systems, which result in a lack of trust in these systems [3]. This aspect is particularly addressed by the concept of 'explainable AI', which is used in the studies of Sakai et al. and Dong et al. [7,201]. To sum up, limitations in the application of AI algorithms are abundant, especially because most study settings seem inadequate for the evaluation of clinical applicability. Considering that AI and its emerging systems are relatively new in the medical field, it is understandable that clinically validated results are still missing.
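One widely used route toward 'explainable AI' in image classifiers is a saliency technique such as Grad-CAM, which highlights the image regions that drive a prediction. The minimal Python/PyTorch sketch below is a generic example of the principle, not necessarily the method applied in the cited studies [7,201].

import torch
import torch.nn.functional as F
from torchvision import models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()
acts, grads = {}, {}

def save_activation(module, inputs, output):
    acts["a"] = output                                  # feature maps
    output.register_hook(lambda g: grads.update(g=g))   # their gradients

model.layer4.register_forward_hook(save_activation)     # last conv block

def grad_cam(x, class_idx):
    """x: (1, 3, H, W) image tensor; returns an (H, W) heatmap in [0, 1]."""
    logits = model(x)
    model.zero_grad()
    logits[0, class_idx].backward()                      # target-class gradient
    w = grads["g"].mean(dim=(2, 3), keepdim=True)        # channel weights
    cam = F.relu((w * acts["a"]).sum(dim=1, keepdim=True))
    cam = F.interpolate(cam, size=x.shape[2:], mode="bilinear",
                        align_corners=False)
    return (cam / cam.max().clamp(min=1e-8)).squeeze()

heatmap = grad_cam(torch.randn(1, 3, 224, 224), class_idx=0)

Overlaying such a heatmap on the original image lets the clinician check whether the model attended to anatomically plausible regions.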

4.3. Strengths and Limitations of This Review

One important advantage of this review is the inclusion of a substantial number of publications over an extensive period of time, with no restrictions regarding the year of publication. The included literature is categorized by subspecialty and research topic, providing a visual overview of current research interests as well as an indication of tasks still underrepresented in AI research.
Regarding the distribution of literature among the subspecialties of OB/GYN, one limiting aspect of this study may be the search query containing the extra keyword 'pregnancy', which likely skewed the obtained records in favor of obstetric studies. Another important limiting aspect is that this review includes research articles from the engineering literature, which are known to take a technical viewpoint and fail to assess clinical applicability. Most applications of AI US imaging in these technical articles are still experimental, preliminary work and have not been sufficiently assessed for clinical applicability, as was also stated in the review of Dhombres et al. [234]. As these technical studies are developed by engineers, they are difficult for clinicians to understand, raising the discussion about the urgent need for an improved interface between AI specialists and clinicians applying AI technology in real-life scenarios [231]. Lastly, a classification of the presented AI applications into the technological subcategories of regression modeling, population classification, and image segmentation would be of further interest and should be considered in future research. In the realm of regression modeling, AI algorithms can predict crucial parameters, such as fetal growth, aiding clinicians in identifying potential complications early on. AI-driven classification systems can enhance the accuracy of diagnoses, ensuring a higher level of precision in identifying abnormalities or diseases. Finally, segmentation applications delineate organs and structures in ultrasound images, supporting accuracy in complex anatomical analyses.
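The three subcategories can be pictured as task-specific heads on a shared convolutional encoder, as in the schematic Python sketch below; all layer sizes are illustrative assumptions, not taken from any cited model.

import torch
import torch.nn as nn

# Shared encoder: extracts feature maps from one grayscale US frame.
encoder = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
)
pool = nn.AdaptiveAvgPool2d(1)

regression_head = nn.Linear(32, 1)       # e.g., a biometric value in mm
classification_head = nn.Linear(32, 2)   # e.g., normal vs. abnormal
segmentation_head = nn.Conv2d(32, 1, 1)  # per-pixel foreground logit

x = torch.randn(1, 1, 128, 128)          # dummy US frame
features = encoder(x)                    # (1, 32, 128, 128)
vector = pool(features).flatten(1)       # (1, 32)
print(regression_head(vector).shape,     # torch.Size([1, 1])
      classification_head(vector).shape, # torch.Size([1, 2])
      segmentation_head(features).shape) # torch.Size([1, 1, 128, 128])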

5. Conclusions

Applications for AI-assisted US range widely, from fetal biometry, echocardiography, neurosonography, and the estimation of gestational age in obstetrics to the identification of adnexal or breast masses and the assessment of the endometrium or pelvic floor in gynecology. The applications for AI-assisted US in OB/GYN are especially numerous in the subspecialty of obstetrics, where the imaging method of US is of particular significance. However, as most studies are of a technical nature and designed by AI engineers, much of the presented literature lacks clinical applicability. This systematic literature review displays the variety of research topics on AI applications in US imaging in OB/GYN, including sparsely represented and potentially emerging topics for further research.
In conclusion, based on abundant evidence, we can proclaim that we are living in and evolving the era of 5D ultrasound, as AI algorithms add, and will continue to add, a momentous further dimension to the existing US imaging methods in OB/GYN.

Supplementary Materials

The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/jcm12216833/s1, Table S1: Overview of included literature on artificial intelligence applications in ultrasound for the subspecialty of obstetrics; Table S2: Overview of included literature on artificial intelligence applications in ultrasound for the subspecialty of gynecology.

Author Contributions

Conceptualization, F.R. and E.J.; methodology, E.J.; validation, F.R. and E.J.; formal analysis, F.R. and E.J.; investigation, E.J.; resources, S.A. and J.J.C.; data curation, E.J. and P.K.; writing—original draft preparation, E.J.; writing—review and editing, F.R., U.G., B.S., and S.A.; visualization, E.J.; supervision, F.R.; project administration, F.R. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the Open Access Publication Fund of the University of Bonn.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data presented in this study are available in the article and its Supplementary Materials.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

2/3/4/5D: Two-/three-/four-/five-dimensional
4CV: Four-chamber view
AC: Abdominal circumference
AI: Artificial intelligence
CHD: Congenital heart disease
CRL: Crown-rump length
CT: Computed tomography
FINE: Fetal intelligent navigation echocardiography
GA: Gestational age
HC: Head circumference
ISUOG: International Society of Ultrasound in Obstetrics & Gynecology
MRI: Magnetic resonance imaging
NT: Nuchal translucency
OB/GYN: Obstetrics and gynecology
ROI: Region of interest
US: Ultrasound

References

  1. Shen, Y.T.; Chen, L.; Yue, W.W.; Xu, H.X. Artificial intelligence in ultrasound. Eur. J. Radiol. 2021, 139, 109717. [Google Scholar] [CrossRef] [PubMed]
  2. U.S. Food and Drug Administration. Artificial Intelligence and Machine Learning (AI/ML)-Enabled Medical Devices. Available online: https://www.fda.gov/medical-devices/software-medical-device-samd/artificial-intelligence-and-machine-learning-aiml-enabled-medical-devices (accessed on 21 August 2023).
  3. Drukker, L.; Noble, J.A.; Papageorghiou, A.T. Introduction to artificial intelligence in ultrasound imaging in obstetrics and gynecology. Ultrasound Obstet. Gynecol. 2020, 56, 498–505. [Google Scholar] [CrossRef] [PubMed]
  4. Diniz, P.H.B.; Yin, Y.; Collins, S. Deep Learning Strategies for Ultrasound in Pregnancy. EMJ Reprod. Health 2020, 6, 73–80. [Google Scholar] [CrossRef]
  5. Reddy, C.D.; van den Eynde, J.; Kutty, S. Artificial intelligence in perinatal diagnosis and management of congenital heart disease. Semin. Perinatol. 2022, 46, 151588. [Google Scholar] [CrossRef] [PubMed]
  6. Xie, H.N.; Wang, N.; He, M.; Zhang, L.H.; Cai, H.M.; Xian, J.B.; Lin, M.F.; Zheng, J.; Yang, Y.Z. Using deep-learning algorithms to classify fetal brain ultrasound images as normal or abnormal. Ultrasound Obstet. Gynecol. 2020, 56, 579–587. [Google Scholar] [CrossRef] [PubMed]
  7. Sakai, A.; Komatsu, M.; Komatsu, R.; Matsuoka, R.; Yasutomi, S.; Dozen, A.; Shozu, K.; Arakaki, T.; Machino, H.; Asada, K.; et al. Medical Professional Enhancement Using Explainable Artificial Intelligence in Fetal Cardiac Ultrasound Screening. Biomedicines 2022, 10, 551. [Google Scholar] [CrossRef] [PubMed]
  8. Sarno, L.; Neola, D.; Carbone, L.; Saccone, G.; Carlea, A.; Miceli, M.; Iorio, G.G.; Mappa, I.; Rizzo, G.; Di Girolamo, R.; et al. Use of artificial intelligence in obstetrics: Not quite ready for prime time. Am. J. Obstet. Gynecol. MFM 2023, 5, 100792. [Google Scholar] [CrossRef]
  9. Leung, K.-Y. Applications of Advanced Ultrasound Technology in Obstetrics. Diagnostics 2021, 11, 1217. Available online: https://pubmed.ncbi.nlm.nih.gov/34359300/ (accessed on 21 August 2023). [CrossRef]
  10. Rizzo, G.; Aiello, E.; Elena Pietrolucci, M.; Arduini, D. The feasibility of using 5D CNS software in obtaining standard fetal head measurements from volumes acquired by three-dimensional ultrasonography: Comparison with two-dimensional ultrasound. J. Matern.-Fetal Neonatal Med. 2016, 29, 2217–2222. [Google Scholar] [CrossRef]
  11. Deshmukh, N.P.; Caban, J.J.; Taylor, R.H.; Hager, G.D.; Boctor, E.M. Five-dimensional ultrasound system for soft tissue visualization. Int. J. Comput. Assist. Radiol. Surg. 2015, 10, 1927–1939. [Google Scholar] [CrossRef]
  12. Page, M.J.; McKenzie, J.E.; Bossuyt, P.M.; Boutron, I.; Hoffmann, T.C.; Mulrow, C.D.; Shamseer, L.; Tetzlaff, J.M.; Moher, D. The PRISMA 2020 statement: An updated guideline for reporting systematic reviews. BMJ 2021, 372, n71. [Google Scholar] [CrossRef] [PubMed]
  13. Liberati, A.; Altman, D.G.; Tetzlaff, J.; Mulrow, C.; Gotzsche, P.C.; Ioannidis, J.P.A.; Clarke, M.; Devereaux, P.J.; Kleijnen, J.; Moher, D. The PRISMA Statement for Reporting Systematic Reviews and Meta-Analyses of Studies That Evaluate Healthcare Interventions: Explanation and Elaboration. BMJ 2009, 339, b2700. [Google Scholar] [CrossRef] [PubMed]
  14. Hadlock, F.P.; Harrist, R.B.; Sharman, R.S.; Deter, R.L.; Park, S.K. Estimation of fetal weight with the use of head, body, and femur measurements—A prospective study. Am. J. Obstet. Gynecol. 1985, 151, 333–337. [Google Scholar] [CrossRef] [PubMed]
  15. Baumgartner, C.F.; Kamnitsas, K.; Matthew, J.; Fletcher, T.P.; Smith, S.; Koch, L.M.; Kainz, B.; Rueckert, D. SonoNet: Real-Time Detection and Localisation of Fetal Standard Scan Planes in Freehand Ultrasound. IEEE Trans. Med. Imaging 2017, 36, 2204–2215. [Google Scholar] [CrossRef] [PubMed]
  16. Sarris, I.; Ioannou, C.; Chamberlain, P.; Ohuma, E.; Roseman, F.; Hoch, L.; Altman, D.G.; Papageorghiou, A.T.; International Fetal and Newborn Growth Consortium for the 21st Century (INTERGROWTH-21st). Intra- and interobserver variability in fetal ultrasound measurements. Ultrasound Obstet. Gynecol. 2012, 39, 266–273. [Google Scholar] [CrossRef] [PubMed]
  17. Sendra-Balcells, C.; Campello, V.M.; Torrents-Barrena, J.; Ahmed, Y.A.; Elattar, M.; Ohene-Botwe, B.; Nyangulu, P.; Stones, W.; Ammar, M.; Benamer, L.N.; et al. Generalisability of fetal ultrasound deep learning models to low-resource imaging settings in five African countries. Sci. Rep. 2023, 13, 2728. [Google Scholar] [CrossRef] [PubMed]
  18. Zeng, W.; Luo, J.; Cheng, J.; Lu, Y. Efficient fetal ultrasound image segmentation for automatic head circumference measurement using a lightweight deep convolutional neural network. Med. Phys. 2022, 49, 5081–5092. [Google Scholar] [CrossRef] [PubMed]
  19. Dashe, J.S.; McIntire, D.D.; Twickler, D.M. Effect of Maternal Obesity on the Ultrasound Detection of Anomalous Fetuses. Obstet. Gynecol. 2009, 113, 1001–1007. [Google Scholar] [CrossRef]
  20. Song, J.; Liu, J.; Liu, L.; Jiang, Y.; Zheng, H.; Ke, H.; Yang, L.; Zhang, Z. The birth weight of macrosomia influence the accuracy of ultrasound estimation of fetal weight at term. J. Clin. Ultrasound 2022, 50, 967–973. [Google Scholar] [CrossRef]
  21. Jang, J.; Park, Y.; Kim, B.; Lee, S.M.; Kwon, J.-Y.; Seo, J.K. Automatic Estimation of Fetal Abdominal Circumference from Ultrasound Images. IEEE J. Biomed. Health Inform. 2018, 22, 1512–1520. [Google Scholar] [CrossRef]
  22. Grandjean, G.A.; Hossu, G.; Bertholdt, C.; Noble, P.; Morel, O.; Grangé, G. Artificial intelligence assistance for fetal head biometry: Assessment of automated measurement software. Diagn. Interv. Imaging 2018, 99, 709–716. [Google Scholar] [CrossRef] [PubMed]
  23. Pluym, I.D.; Afshar, Y.; Holliman, K.; Kwan, L.; Bolagani, A.; Mok, T.; Silver, B.; Ramirez, E.; Han, C.S.; Platt, L.D. Accuracy of automated three-dimensional ultrasound imaging technique for fetal head biometry. Ultrasound Obstet. Gynecol. 2021, 57, 798–803. [Google Scholar] [CrossRef] [PubMed]
  24. Yang, X.; Dou, H.; Huang, R.; Xue, W.; Huang, Y.; Qian, J.; Zhang, Y.; Luo, H.; Guo, H.; Wang, T.; et al. Agent with Warm Start and Adaptive Dynamic Termination for Plane Localization in 3D Ultrasound. IEEE Trans. Med. Imaging 2021, 40, 1950–1961. [Google Scholar] [CrossRef] [PubMed]
  25. Płotka, S.; Klasa, A.; Lisowska, A.; Seliga-Siwecka, J.; Lipa, M.; Trzciński, T.; Sitek, A. Deep learning fetal ultrasound video model match human observers in biometric measurements. Phys. Med. Biol. 2022, 67, 045013. [Google Scholar] [CrossRef] [PubMed]
  26. Sridar, P.; Kumar, A.; Quinton, A.; Nanan, R.; Kim, J.; Krishnakumar, R. Decision Fusion-Based Fetal Ultrasound Image Plane Classification Using Convolutional Neural Networks. Ultrasound Med. Biol. 2019, 45, 1259–1273. [Google Scholar] [CrossRef]
  27. Burgos-Artizzu, X.P.; Coronado-Gutiérrez, D.; Valenzuela-Alcaraz, B.; Bonet-Carne, E.; Eixarch, E.; Crispi, F.; Gratacos, E. Evaluation of deep convolutional neural networks for automatic classification of common maternal fetal ultrasound planes. Sci. Rep. 2020, 10, 10200. [Google Scholar] [CrossRef]
  28. Chen, H.; Wu, L.; Dou, Q.; Qin, J.; Li, S.; Cheng, J.-Z.; Ni, D.; Heng, P.-A. Ultrasound Standard Plane Detection Using a Composite Neural Network Framework. IEEE Trans. Cybern. 2017, 47, 1576–1586. [Google Scholar] [CrossRef]
  29. Zhang, B.; Liu, H.; Luo, H.; Li, K. Automatic quality assessment for 2D fetal sonographic standard plane based on multitask learning. Medicine 2021, 100, e24427. [Google Scholar] [CrossRef]
  30. Rahman, R.; Alam, M.d.G.R.; Reza, M.d.T.; Huq, A.; Jeon, G.; Uddin, M.d.Z.; Hassan, M.M. Demystifying evidential Dempster Shafer-based CNN architecture for fetal plane detection from 2D ultrasound images leveraging fuzzy-contrast enhancement and explainable AI. Ultrasonics 2023, 132, 107017. [Google Scholar] [CrossRef]
  31. Carneiro, G.; Georgescu, B.; Good, S.; Comaniciu, D. Automatic Fetal Measurements in Ultrasound Using Constrained Probabilistic Boosting Tree. In Medical Image Computing and Computer-Assisted Intervention—MICCAI 2007, Proceedings of the 10th International Conference, Brisbane, Australia, 29 October–2 November 2007; Ayache, N., Ourselin, S., Maeder, A., Eds.; Lecture Notes in Computer Science; Springer: Berlin/Heidelberg, Germany, 2007; Volume 4792, pp. 571–579. Available online: http://link.springer.com/10.1007/978-3-540-75759-7_69 (accessed on 9 June 2023).
  32. Carneiro, G.; Georgescu, B.; Good, S.; Comaniciu, D. Detection and Measurement of Fetal Anatomies from Ultrasound Images using a Constrained Probabilistic Boosting Tree. IEEE Trans. Med. Imaging 2008, 27, 1342–1355. [Google Scholar] [CrossRef]
  33. Luo, D.; Wen, H.; Peng, G.; Lin, Y.; Liang, M.; Liao, Y.; Qin, Y.; Zeng, Q.; Dang, J.; Li, S. A Prenatal Ultrasound Scanning Approach: One-Touch Technique in Second and Third Trimesters. Ultrasound Med. Biol. 2021, 47, 2258–2265. [Google Scholar] [CrossRef] [PubMed]
  34. Zeng, Y.; Tsui, P.-H.; Wu, W.; Zhou, Z.; Wu, S. Fetal Ultrasound Image Segmentation for Automatic Head Circumference Biometry Using Deeply Supervised Attention-Gated V-Net. J. Digit. Imaging 2021, 34, 134–148. [Google Scholar] [CrossRef] [PubMed]
  35. Heuvel, T.L.v.D.; Petros, H.; Santini, S.; de Korte, C.L.; van Ginneken, B. Automated Fetal Head Detection and Circumference Estimation from Free-Hand Ultrasound Sweeps Using Deep Learning in Resource-Limited Countries. Ultrasound Med. Biol. 2019, 45, 773–785. [Google Scholar] [CrossRef] [PubMed]
  36. Li, J.; Wang, Y.; Lei, B.; Cheng, J.-Z.; Qin, J.; Wang, T.; Li, S.; Ni, D. Automatic Fetal Head Circumference Measurement in Ultrasound Using Random Forest and Fast Ellipse Fitting. IEEE J. Biomed. Health Inform. 2018, 22, 215–223. [Google Scholar] [CrossRef] [PubMed]
  37. Yang, C.; Yang, Z.; Liao, S.; Guo, J.; Yin, S.; Liu, C.; Kang, Y. A new approach to automatic measure fetal head circumference in ultrasound images using convolutional neural networks. Comput. Biol. Med. 2022, 147, 105801. [Google Scholar] [CrossRef] [PubMed]
  38. Zhang, J.; Petitjean, C.; Ainouz, S. Segmentation-Based vs. Regression-Based Biomarker Estimation: A Case Study of Fetus Head Circumference Assessment from Ultrasound Images. J. Imaging 2022, 8, 23. [Google Scholar] [CrossRef] [PubMed]
  39. Arroyo, J.; Marini, T.J.; Saavedra, A.C.; Toscano, M.; Baran, T.M.; Drennan, K.; Dozier, A.; Zhao, Y.T.; Egoavil, M.; Tamayo, L.; et al. No sonographer, no radiologist: New system for automatic prenatal detection of fetal biometry, fetal presentation, and placental location. PLoS ONE 2022, 17, e0262107. [Google Scholar] [CrossRef]
  40. Li, P.; Zhao, H.; Liu, P.; Cao, F. Automated measurement network for accurate segmentation and parameter modification in fetal head ultrasound images. Med. Biol. Eng. Comput. 2020, 58, 2879–2892. [Google Scholar] [CrossRef]
  41. Kim, B.; Kim, K.C.; Park, Y.; Kwon, J.-Y.; Jang, J.; Seo, J.K. Machine-learning-based automatic identification of fetal abdominal circumference from ultrasound images. Physiol. Meas. 2018, 39, 105007. [Google Scholar] [CrossRef]
  42. Ni, D.; Yang, X.; Chen, X.; Chin, C.-T.; Chen, S.; Heng, P.A.; Li, S.; Qin, J.; Wang, T. Standard Plane Localization in Ultrasound by Radial Component Model and Selective Search. Ultrasound Med. Biol. 2014, 40, 2728–2742. [Google Scholar] [CrossRef]
  43. Chen, H.; Ni, D.; Qin, J.; Li, S.; Yang, X.; Wang, T.; Heng, P.A. Standard Plane Localization in Fetal Ultrasound via Domain Transferred Deep Neural Networks. IEEE J. Biomed. Health Inform. 2015, 19, 1627–1636. [Google Scholar] [CrossRef] [PubMed]
  44. Zhu, F.; Liu, M.; Wang, F.; Qiu, D.; Li, R.; Dai, C. Automatic measurement of fetal femur length in ultrasound images: A comparison of random forest regression model and SegNet. Math. Biosci. Eng. 2021, 18, 7790–7805. [Google Scholar] [CrossRef] [PubMed]
  45. Van Den Heuvel, T.L.A.; De Bruijn, D.; De Korte, C.L.; Ginneken, B.V. Automated measurement of fetal head circumference using 2D ultrasound images. PLoS ONE 2018, 13, e0200412. [Google Scholar] [CrossRef] [PubMed]
  46. Donofrio, M.T.; Moon-Grady, A.J.; Hornberger, L.K.; Copel, J.A.; Sklansky, M.S.; Abuhamad, A.; Cuneao, B.F.; Huhta, J.C.; Jonas, R.A.; Krishnan, A.; et al. Diagnosis and Treatment of Fetal Cardiac Disease: A Scientific Statement from the American Heart Association. Circulation 2014, 129, 2183–2242. [Google Scholar] [CrossRef] [PubMed]
  47. Gembruch, U. Prenatal diagnosis of congenital heart disease. Prenat. Diagn. 1997, 17, 1283–1298. [Google Scholar] [CrossRef]
  48. Carvalho, J.; Allan, L.; Chaoui, R.; Copel, J.; DeVore, G.; Hecher, K.; Lee, W.; Munoz, H.; Paladini, D.; Tutschek, B.; et al. ISUOG Practice Guidelines (updated): Sonographic screening examination of the fetal heart. Ultrasound Obstet. Gynecol. 2013, 41, 348–359. [Google Scholar] [CrossRef] [PubMed]
  49. Bensemlali, M.; Bajolle, F.; Laux, D.; Parisot, P.; Ladouceur, M.; Fermont, L.; Levy, M.; Le Bidois, J.; Raimondi, F.; Ville, Y.; et al. Neonatal management and outcomes of prenatally diagnosed CHDs. Cardiol. Young 2017, 27, 344–353. [Google Scholar] [CrossRef] [PubMed]
  50. Arnaout, R.; Curran, L.; Zhao, Y.; Levine, J.C.; Chinn, E.; Moon-Grady, A.J. An ensemble of neural networks provides expert-level prenatal detection of complex congenital heart disease. Nat. Med. 2021, 27, 882–891. [Google Scholar] [CrossRef]
  51. Yeo, L.; Romero, R. Fetal Intelligent Navigation Echocardiography (FINE): A novel method for rapid, simple, and automatic examination of the fetal heart: Fetal intelligent navigation echocardiography (FINE). Ultrasound Obstet. Gynecol. 2013, 42, 268–284. [Google Scholar] [CrossRef]
  52. Gembicki, M.; Hartge, D.R.; Dracopoulos, C.; Weichert, J. Semiautomatic Fetal Intelligent Navigation Echocardiography Has the Potential to Aid Cardiac Evaluations Even in Less Experienced Hands. J. Ultrasound Med. 2020, 39, 301–309. [Google Scholar] [CrossRef]
  53. Ma, M.; Li, Y.; Chen, R.; Huang, C.; Mao, Y.; Zhao, B. Diagnostic performance of fetal intelligent navigation echocardiography (FINE) in fetuses with double-outlet right ventricle (DORV). Int. J. Cardiovasc. Imaging 2020, 36, 2165–2172. [Google Scholar] [CrossRef] [PubMed]
  54. Veronese, P.; Guariento, A.; Cattapan, C.; Fedrigo, M.; Gervasi, M.T.; Angelini, A.; Riva, A.; Vida, V. Prenatal Diagnosis and Fetopsy Validation of Complete Atrioventricular Septal Defects Using the Fetal Intelligent Navigation Echocardiography Method. Diagnostics 2023, 13, 456. [Google Scholar] [CrossRef] [PubMed]
  55. Dozen, A.; Komatsu, M.; Sakai, A.; Komatsu, R.; Shozu, K.; Machino, H.; Yasutomi, S.; Arakaki, T.; Asada, K.; Kaneko, S.; et al. Image Segmentation of the Ventricular Septum in Fetal Cardiac Ultrasound Videos Based on Deep Learning Using Time-Series Information. Biomolecules 2020, 10, 1526. [Google Scholar] [CrossRef] [PubMed]
  56. Han, G.; Jin, T.; Zhang, L.; Guo, C.; Gui, H.; Na, R.; Wang, X.; Bai, H. Adoption of Compound Echocardiography under Artificial Intelligence Algorithm in Fetal Congenial Heart Disease Screening during Gestation. Appl. Bionics Biomech. 2022, 2022, 6410103. [Google Scholar] [CrossRef] [PubMed]
  57. Xu, L.; Liu, M.; Shen, Z.; Wang, H.; Liu, X.; Wang, X.; Wang, S.; Li, T.; Yu, S.; Hou, M.; et al. DW-Net: A cascaded convolutional neural network for apical four-chamber view segmentation in fetal echocardiography. Comput. Med. Imaging Graph. 2020, 80, 101690. [Google Scholar] [CrossRef]
  58. Wu, H.; Wu, B.; Lai, F.; Liu, P.; Lyu, G.; He, S.; Dai, J. Application of Artificial Intelligence in Anatomical Structure Recognition of Standard Section of Fetal Heart. Comput. Math. Methods Med. 2023, 2023, 5650378. [Google Scholar] [CrossRef]
  59. Yang, Y.; Wu, B.; Wu, H.; Xu, W.; Lyu, G.; Liu, P.; He, S. Classification of normal and abnormal fetal heart ultrasound images and identification of ventricular septal defects based on deep learning. J. Perinat. Med. 2023, 51, 1052–1058. [Google Scholar] [CrossRef] [PubMed]
  60. Nurmaini, S.; Rachmatullah, M.N.; Sapitri, A.I.; Darmawahyuni, A.; Tutuko, B.; Firdaus, F.; Partan, R.U.; Bernolian, N. Deep Learning-Based Computer-Aided Fetal Echocardiography: Application to Heart Standard View Segmentation for Congenital Heart Defects Detection. Sensors 2021, 21, 8007. [Google Scholar] [CrossRef]
  61. Gong, Y.; Zhang, Y.; Zhu, H.; Lv, J.; Cheng, Q.; Zhang, H.; He, Y.; Wang, S. Fetal Congenital Heart Disease Echocardiogram Screening Based on DGACNN: Adversarial One-Class Classification Combined with Video Transfer Learning. IEEE Trans. Med. Imaging 2020, 39, 1206–1222. [Google Scholar] [CrossRef]
  62. Nurmaini, S.; Partan, R.U.; Bernolian, N.; Sapitri, A.I.; Tutuko, B.; Rachmatullah, M.N.; Darmawahyuni, A.; Firdaus, F.; Mose, J.C. Deep Learning for Improving the Effectiveness of Routine Prenatal Screening for Major Congenital Heart Diseases. J. Clin. Med. 2022, 11, 6454. [Google Scholar] [CrossRef]
  63. Wang, X.; Yang, T.; Zhang, Y.; Liu, X.; Zhang, Y.; Sun, L.; Gu, X.; Chen, Z.; Guo, Y.; Xue, C.; et al. Diagnosis of fetal total anomalous pulmonary venous connection based on the post-left atrium space ratio using artificial intelligence. Prenat. Diagn. 2022, 42, 1323–1331. [Google Scholar] [CrossRef]
  64. Yu, L.; Guo, Y.; Wang, Y.; Yu, J.; Chen, P. Determination of Fetal Left Ventricular Volume Based on Two-Dimensional Echocardiography. J. Healthc. Eng. 2017, 2017, 4797315. [Google Scholar] [CrossRef] [PubMed]
  65. Herling, L.; Johnson, J.; Ferm-Widlund, K.; Zamprakou, A.; Westgren, M.; Acharya, G. Automated quantitative evaluation of fetal atrioventricular annular plane systolic excursion. Ultrasound Obstet. Gynecol. 2021, 58, 853–863. [Google Scholar] [CrossRef] [PubMed]
  66. Scharf, J.L.; Dracopoulos, C.; Gembicki, M.; Welp, A.; Weichert, J. How Automated Techniques Ease Functional Assessment of the Fetal Heart: Applicability of MPI+™ for Direct Quantification of the Modified Myocardial Performance Index. Diagnostics 2023, 13, 1705. [Google Scholar] [CrossRef] [PubMed]
  67. Qiao, S.; Pan, S.; Luo, G.; Pang, S.; Chen, T.; Singh, A.K.; Lv, Z. A Pseudo-Siamese Feature Fusion Generative Adversarial Network for Synthesizing High-Quality Fetal Four-Chamber Views. IEEE J. Biomed. Health Inform. 2023, 27, 1193–1204. [Google Scholar] [CrossRef] [PubMed]
  68. Patra, A.; Noble, J.A. Hierarchical Class Incremental Learning of Anatomical Structures in Fetal Echocardiography Videos. IEEE J. Biomed. Health Inform. 2020, 24, 1046–1058. [Google Scholar] [CrossRef] [PubMed]
  69. Emery, S.P.; Kreutzer, J.; Sherman, F.R.; Fujimoto, K.L.; Jaramaz, B.; Nikou, C.; Tobita, K.; Keller, B.B. Computer-assisted navigation applied to fetal cardiac intervention. Int. J. Med. Robot. Comput. Assist. Surg. 2007, 3, 187–198. [Google Scholar] [CrossRef]
  70. Dong, J.; Liu, S.; Liao, Y.; Wen, H.; Lei, B.; Li, S.; Wang, T. A Generic Quality Control Framework for Fetal Ultrasound Cardiac Four-Chamber Planes. IEEE J. Biomed. Health Inform. 2020, 24, 931–942. [Google Scholar] [CrossRef] [PubMed]
  71. Pietrolucci, M.E.; Maqina, P.; Mappa, I.; Marra, M.C.; D'Antonio, F.; Rizzo, G. Evaluation of an artificial intelligent algorithm (Heartassist™) to automatically assess the quality of second trimester cardiac views: A prospective study. J. Perinat. Med. 2023, 51, 920–924. [Google Scholar] [CrossRef]
  72. Vijayalakshmi, S.; Sriraam, N.; Suresh, S.; Muttan, S. Automated region mask for four-chamber fetal heart biometry. J. Clin. Monit. Comput. 2013, 27, 205–209. [Google Scholar] [CrossRef]
  73. Sriraam, N.; Punyaprabha, V.; Sushma, T.V.; Suresh, S. Performance evaluation of computer-aided automated master frame selection techniques for fetal echocardiography. Med. Biol. Eng. Comput. 2023, 61, 1723–1744. [Google Scholar] [CrossRef] [PubMed]
  74. Graupner, O.; Enzensberger, C.; Wieg, L.; Willruth, A.; Steinhard, J.; Gembruch, U.; Doelle, A.; Bahlmann, F.; Kawecki, A.; Degenhardt, J.; et al. Evaluation of right ventricular function in fetal hypoplastic left heart syndrome by color tissue Doppler imaging: Right ventricular function in fetal HLHS. Ultrasound Obstet. Gynecol. 2016, 47, 732–738. [Google Scholar] [CrossRef] [PubMed]
  75. Sun, L.; Wang, J.; Su, X.; Chen, X.; Zhou, Y.; Zhang, X.; Lu, H.; Niu, J.; Yu, L.; Sun, C.; et al. Reference ranges of fetal heart function using a Modified Myocardial Performance Index: A prospective multicentre, cross-sectional study. BMJ Open 2021, 11, e049640. [Google Scholar] [CrossRef] [PubMed]
  76. Lane, E.S.; Jevsikov, J.; Shun-Shin, M.J.; Dhutia, N.; Matoorian, N.; Cole, G.D.; Francis, D.P.; Zolgharni, M. Automated multi-beat tissue Doppler echocardiography analysis using deep neural networks. Med. Biol. Eng. Comput. 2023, 61, 911–926. [Google Scholar] [CrossRef] [PubMed]
  77. Sonographic examination of the fetal central nervous system: Guidelines for performing the ‘basic examination’ and the ‘fetal neurosonogram’. Ultrasound Obstet. Gynecol. 2007, 29, 109–116. [CrossRef] [PubMed]
  78. Yeung, P.-H.; Aliasi, M.; Papageorghiou, A.T.; Haak, M.; Xie, W.; Namburete, A.I. Learning to map 2D ultrasound images into 3D space with minimal human annotation. Med. Image Anal. 2021, 70, 101998. [Google Scholar] [CrossRef] [PubMed]
  79. Hesse, L.S.; Aliasi, M.; Moser, F.; INTERGROWTH-21(st) Consortium; Haak, M.C.; Xie, W.; Jenkinson, M.; Namburete, A.I. Subcortical segmentation of the fetal brain in 3D ultrasound using deep learning. NeuroImage 2022, 254, 119117. [Google Scholar] [CrossRef] [PubMed]
  80. Namburete, A.I.; Xie, W.; Yaqub, M.; Zisserman, A.; Noble, J.A. Fully-automated alignment of 3D fetal brain ultrasound to a canonical reference space using multi-task learning. Med. Image Anal. 2018, 46, 1–14. [Google Scholar] [CrossRef]
  81. Papageorghiou, A.T.; Ohuma, E.O.; Altman, D.G.; Todros, T.; Ismail, L.C.; Lambert, A.; Jaffer, Y.A.; Bertino, E.; Gravett, M.G.; Purwar, M.; et al. International standards for fetal growth based on serial ultrasound measurements: The Fetal Growth Longitudinal Study of the INTERGROWTH-21st Project. Lancet 2014, 384, 869–879. [Google Scholar] [CrossRef]
  82. Di Vece, C.; Dromey, B.; Vasconcelos, F.; David, A.L.; Peebles, D.; Stoyanov, D. Deep learning-based plane pose regression in obstetric ultrasound. Int. J. Comput. Assist. Radiol. Surg. 2022, 17, 833–839. [Google Scholar] [CrossRef]
  83. Yu, Y.; Chen, Z.; Zhuang, Y.; Yi, H.; Han, L.; Chen, K.; Lin, J. A guiding approach of Ultrasound scan for accurately obtaining standard diagnostic planes of fetal brain malformation. J. X-ray Sci. Technol. 2022, 30, 1243–1260. [Google Scholar] [CrossRef]
  84. Xu, Y.; Lee, L.H.; Drukker, L.; Yaqub, M.; Papageorghiou, A.T.; Noble, A.J. Simulating realistic fetal neurosonography images with appearance and growth change using cycle-consistent adversarial networks and an evaluation. J. Med. Imaging 2020, 7, 057001. [Google Scholar] [CrossRef] [PubMed]
  85. Lin, Q.; Zhou, Y.; Shi, S.; Zhang, Y.; Yin, S.; Liu, X.; Peng, Q.; Huang, S.; Jiang, Y.; Cui, C.; et al. How much can AI see in early pregnancy: A multi-center study of fetus head characterization in week 10–14 in ultrasound using deep learning. Comput. Methods Programs Biomed. 2022, 226, 107170. [Google Scholar] [CrossRef]
  86. Lin, M.; He, X.; Guo, H.; He, M.; Zhang, L.; Xian, J.; Lei, T.; Xu, Q.; Zheng, J.; Feng, J.; et al. Use of real-time artificial intelligence in detection of abnormal image patterns in standard sonographic reference planes in screening for fetal intracranial malformations. Ultrasound Obstet. Gynecol. 2022, 59, 304–316. [Google Scholar] [CrossRef] [PubMed]
  87. Alansary, A.; Oktay, O.; Li, Y.; Le Folgoc, L.; Hou, B.; Vaillant, G.; Kamnitsas, K.; Vlontzos, A.; Glocker, B.; Kainz, B.; et al. Evaluating reinforcement learning agents for anatomical landmark detection. Med. Image Anal. 2019, 53, 156–164. [Google Scholar] [CrossRef]
  88. Gofer, S.; Haik, O.; Bardin, R.; Gilboa, Y.; Perlman, S. Machine Learning Algorithms for Classification of First-Trimester Fetal Brain Ultrasound Images. J. Ultrasound Med. 2022, 41, 1773–1779. [Google Scholar] [CrossRef] [PubMed]
  89. Huang, R.; Xie, W.; Noble, J.A. VP-Nets: Efficient automatic localization of key brain structures in 3D fetal neurosonography. Med. Image Anal. 2018, 47, 127–139. [Google Scholar] [CrossRef]
  90. Lin, Z.; Li, S.; Ni, D.; Liao, Y.; Wen, H.; Du, J.; Chen, S.; Wang, T.; Lei, B. Multi-task learning for quality assessment of fetal head ultrasound images. Med. Image Anal. 2019, 58, 101548. [Google Scholar] [CrossRef]
  91. Yaqub, M.; Kelly, B.; Papageorghiou, A.T.; Noble, J.A. A Deep Learning Solution for Automatic Fetal Neurosonographic Diagnostic Plane Verification Using Clinical Standard Constraints. Ultrasound Med. Biol. 2017, 43, 2925–2933. [Google Scholar] [CrossRef]
  92. Skelton, E.; Matthew, J.; Li, Y.; Khanal, B.; Martinez, J.C.; Toussaint, N.; Gupta, C.; Knight, C.; Kainz, B.; Hajnal, J.; et al. Towards automated extraction of 2D standard fetal head planes from 3D ultrasound acquisitions: A clinical evaluation and quality assessment comparison. Radiography 2021, 27, 519–526. [Google Scholar] [CrossRef]
  93. Xie, B.; Lei, T.; Wang, N.; Cai, H.; Xian, J.; He, M.; Zhang, L.; Xie, H. Computer-aided diagnosis for fetal brain ultrasound images using deep convolutional neural networks. Int. J. Comput. Assist. Radiol. Surg. 2020, 15, 1303–1312. [Google Scholar] [CrossRef] [PubMed]
  94. Sahli, H.; Mouelhi, A.; Ben Slama, A.; Sayadi, M.; Rachdi, R. Supervised classification approach of biometric measures for automatic fetal defect screening in head ultrasound images. J. Med. Eng. Technol. 2019, 43, 279–286. [Google Scholar] [CrossRef] [PubMed]
  95. Burgos-Artizzu, X.P.; Coronado-Gutiérrez, D.; Valenzuela-Alcaraz, B.; Vellvé, K.; Eixarch, E.; Crispi, F.; Bonet-Carne, E.; Bennasar, M.; Gratacos, E. Analysis of maturation features in fetal brain ultrasound via artificial intelligence for the estimation of gestational age. Am. J. Obstet. Gynecol. MFM 2021, 3, 100462. [Google Scholar] [CrossRef] [PubMed]
  96. Sreelakshmy, R.; Titus, A.; Sasirekha, N.; Logashanmugam, E.; Begam, R.B.; Ramkumar, G.; Raju, R. An Automated Deep Learning Model for the Cerebellum Segmentation from Fetal Brain Images. BioMed Res. Int. 2022, 2022, 8342767. [Google Scholar] [CrossRef] [PubMed]
  97. Wang, X.; Liu, Z.; Du, Y.; Diao, Y.; Liu, P.; Lv, G.; Zhang, H. Recognition of Fetal Facial Ultrasound Standard Plane Based on Texture Feature Fusion. Comput. Math. Methods Med. 2021, 2021, 656942. [Google Scholar] [CrossRef] [PubMed]
  98. Yu, Z.; Tan, E.-L.; Ni, D.; Qin, J.; Chen, S.; Li, S.; Lei, B.; Wang, T. A Deep Convolutional Neural Network-Based Framework for Automatic Fetal Facial Standard Plane Recognition. IEEE J. Biomed. Health Inform. 2018, 22, 874–885. [Google Scholar] [CrossRef] [PubMed]
  99. Tang, J.; Han, J.; Xie, B.; Xue, J.; Zhou, H.; Jiang, Y.; Hu, L.; Chen, C.; Zhang, K.; Zhu, F.; et al. The Two-Stage Ensemble Learning Model Based on Aggregated Facial Features in Screening for Fetal Genetic Diseases. Int. J. Environ. Res. Public Health 2023, 20, 2377. [Google Scholar] [CrossRef]
  100. Hata, T. Current status of fetal neurodevelopmental assessment: Four-dimensional ultrasound study: Fetal neurodevelopmental assessment. J. Obstet. Gynaecol. Res. 2016, 42, 1211–1221. [Google Scholar] [CrossRef]
  101. AboEllail, M.A.M.; Hata, T. Fetal face as important indicator of fetal brain function. J. Perinat. Med. 2017, 45, 729–736. [Google Scholar] [CrossRef]
  102. Miyagi, Y.; Hata, T.; Bouno, S.; Koyanagi, A.; Miyake, T. Artificial intelligence to understand fluctuation of fetal brain activity by recognizing facial expressions. Int. J. Gynecol. Obstet. 2023, 161, 877–885. [Google Scholar] [CrossRef]
  103. Miyagi, Y.; Hata, T.; Bouno, S.; Koyanagi, A.; Miyake, T. Recognition of facial expression of fetuses by artificial intelligence (AI). J. Perinat. Med. 2021, 49, 596–603. [Google Scholar] [CrossRef] [PubMed]
  104. Miyagi, Y.; Hata, T.; Miyake, T. Fetal brain activity and the free energy principle. J. Perinat. Med. 2023, 51, 925–931. [Google Scholar] [CrossRef] [PubMed]
  105. Sun, C.; Groom, K.M.; Oyston, C.; Chamley, L.W.; Clark, A.R.; James, J.L. The placenta in fetal growth restriction: What is going wrong? Placenta 2020, 96, 10–18. [Google Scholar] [CrossRef] [PubMed]
  106. Maltepe, E.; Fisher, S.J. Placenta: The Forgotten Organ. Annu. Rev. Cell Dev. Biol. 2015, 31, 523–552. [Google Scholar] [CrossRef] [PubMed]
  107. Walter, A.; Böckenhoff, P.; Geipel, A.; Gembruch, U.; Engels, A.C. Early sonographic evaluation of the placenta in cases with IUGR: A pilot study. Arch. Gynecol. Obstet. 2020, 302, 337–343. [Google Scholar] [CrossRef] [PubMed]
  108. Hu, R.; Singla, R.; Yan, R.; Mayer, C.; Rohling, R.N. Automated Placenta Segmentation with a Convolutional Neural Network Weighted by Acoustic Shadow Detection. In Proceedings of the 41st Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), Berlin, Germany, 23–27 July 2019; IEEE: New York, NY, USA, 2019; pp. 6718–6723. Available online: https://ieeexplore.ieee.org/document/8857448/ (accessed on 9 July 2023).
  109. Andreasen, L.A.; Feragen, A.; Christensen, A.N.; Thybo, J.K.; Svendsen, M.B.S.; Zepf, K.; Lekadir, K.; Tolsgaard, M.G. Multi-centre deep learning for placenta segmentation in obstetric ultrasound with multi-observer and cross-country generalization. Sci. Rep. 2023, 13, 2221. [Google Scholar] [CrossRef] [PubMed]
  110. Schilpzand, M.; Neff, C.; van Dillen, J.; van Ginneken, B.; Heskes, T.; de Korte, C.; Heuvel, T.v.D. Automatic Placenta Localization from Ultrasound Imaging in a Resource-Limited Setting Using a Predefined Ultrasound Acquisition Protocol and Deep Learning. Ultrasound Med. Biol. 2022, 48, 663–674. [Google Scholar] [CrossRef]
  111. Plasencia, W.; Akolekar, R.; Dagklis, T.; Veduta, A.; Nicolaides, K.H. Placental Volume at 11–13 Weeks’ Gestation in the Prediction of Birth Weight Percentile. Fetal Diagn. Ther. 2011, 30, 23–28. [Google Scholar] [CrossRef]
  112. Schwartz, N.; Oguz, I.; Wang, J.; Pouch, A.; Yushkevich, N.; Parameshwaran, S.; Gee, J.; Yushkevich, P.; Oguz, B. Fully Automated Placental Volume Quantification From 3D Ultrasound for Prediction of Small-for-Gestational-Age Infants. J. Ultrasound Med. 2022, 41, 1509–1524. [Google Scholar] [CrossRef]
  113. Looney, P.; Yin, Y.; Collins, S.L.; Nicolaides, K.H.; Plasencia, W.; Molloholli, M.; Natsis, S.; Stevenson, G.N. Fully Automated 3-D Ultrasound Segmentation of the Placenta, Amniotic Fluid, and Fetus for Early Pregnancy Assessment. IEEE Trans. Ultrason. Ferroelectr. Freq. Control 2021, 68, 2038–2047. [Google Scholar] [CrossRef]
  114. Qi, H.; Collins, S.; Noble, J.A. Automatic Lacunae Localization in Placental Ultrasound Images via Layer Aggregation. In Medical Image Computing and Computer Assisted Intervention—MICCAI 2018, Proceedings of the 1st International Conference, Granada, Spain, 16–20 September 2018; Frangi, A.F., Schnabel, J.A., Davatzikos, C., Alberola-López, C., Fichtinger, G., Eds.; Lecture Notes in Computer Science; Springer International Publishing: Cham, Switzerland, 2018; Volume 11071, pp. 921–929. Available online: http://link.springer.com/10.1007/978-3-030-00934-2_102 (accessed on 9 June 2023).
  115. Qi, H.; Collins, S.; Noble, A. Weakly Supervised Learning of Placental Ultrasound Images with Residual Networks. In Medical Image Understanding and Analysis; Valdés Hernández, M., González-Castro, V., Eds.; Communications in Computer and Information Science; Springer International Publishing: Cham, Switzerland, 2017; Volume 723, pp. 98–108. Available online: http://link.springer.com/10.1007/978-3-319-60964-5_9 (accessed on 9 June 2023).
  116. Lei, B.; Yao, Y.; Chen, S.; Li, S.; Li, W.; Ni, D.; Wang, T. Discriminative Learning for Automatic Staging of Placental Maturity via Multi-layer Fisher Vector. Sci. Rep. 2015, 5, 12818. [Google Scholar] [CrossRef] [PubMed]
  117. Li, X.; Yao, Y.; Ni, D.; Chen, S.; Li, S.; Lei, B.; Wang, T. Automatic staging of placental maturity based on dense descriptor. Bio-Med. Mater. Eng. 2014, 24, 2821–2829. [Google Scholar] [CrossRef] [PubMed]
  118. Gupta, K.; Balyan, K.; Lamba, B.; Puri, M.; Sengupta, D.; Kumar, M. Ultrasound placental image texture analysis using artificial intelligence to predict hypertension in pregnancy. J. Matern. Neonatal Med. 2022, 35, 5587–5594. [Google Scholar] [CrossRef] [PubMed]
  119. Sun, H.; Jiao, J.; Ren, Y.; Guo, Y.; Wang, Y. Multimodal fusion model for classifying placenta ultrasound imaging in pregnancies with hypertension disorders. Pregnancy Hypertens. 2023, 31, 46–53. [Google Scholar] [CrossRef] [PubMed]
  120. Sun, H.; Jiao, J.; Ren, Y.; Guo, Y.; Wang, Y. Model application to quantitatively evaluate placental features from ultrasound images with gestational diabetes. J. Clin. Ultrasound 2022, 50, 976–983. [Google Scholar] [CrossRef] [PubMed]
  121. Yang, X.; Chen, Z.; Jia, X. Deep Learning Algorithm-Based Ultrasound Image Information in Diagnosis and Treatment of Pernicious Placenta Previa. Comput. Math. Methods Med. 2022, 2022, 3452176. [Google Scholar] [CrossRef] [PubMed]
  122. Asadpour, V.; Puttock, E.J.; Getahun, D.; Fassett, M.J.; Xie, F. Automated placental abruption identification using semantic segmentation, quantitative features, SVM, ensemble and multi-path CNN. Heliyon 2023, 9, e13577. [Google Scholar] [CrossRef]
  123. Pradipta, G.A.; Wardoyo, R.; Musdholifah, A.; Sanjaya, I.N.H. Machine learning model for umbilical cord classification using combination coiling index and texture feature based on 2-D Doppler ultrasound images. Health Inform. J. 2022, 28, 146045822210842. [Google Scholar] [CrossRef]
  124. Beksaç, M.; Egemen, A.; Izzetoǵlu, K.; Ergün, G.; Erkmen, A.M. An automated intelligent diagnostic system for the interpretation of umbilical artery Doppler velocimetry. Eur. J. Radiol. 1996, 23, 162–167. [Google Scholar] [CrossRef]
  125. Beksaç, M.; Başaran, F.; Eskiizmirliler, S.; Erkmen, A.M.; Yörükan, S. A computerized diagnostic system for the interpretation of umbilical artery blood flow velocity waveforms. Eur. J. Obstet. Gynecol. Reprod. Biol. 1996, 64, 37–42. [Google Scholar] [CrossRef]
  126. Baykal, N.; Reggia, J.A.; Yalabik, N.; Erkmen, A.; Beksac, M.S. Interpretation of Doppler blood flow velocity waveforms using neural networks. Proc. Annu. Symp. Comput. Appl. Med. Care 1994, 865–869. [Google Scholar]
  127. Torrents-Barrena, J.; Monill, N.; Piella, G.; Gratacós, E.; Eixarch, E.; Ceresa, M.; Ballester, M.A.G. Assessment of Radiomics and Deep Learning for the Segmentation of Fetal and Maternal Anatomy in Magnetic Resonance Imaging and Ultrasound. Acad. Radiol. 2021, 28, 173–188. [Google Scholar] [CrossRef] [PubMed]
  128. Torrents-Barrena, J.; López-Velazco, R.; Piella, G.; Masoller, N.; Valenzuela-Alcaraz, B.; Gratacós, E.; Eixarch, E.; Ceresa, M.; Ballester, M.G. TTTS-GPS: Patient-specific preoperative planning and simulation platform for twin-to-twin transfusion syndrome fetal surgery. Comput. Methods Programs Biomed. 2019, 179, 104993. [Google Scholar] [CrossRef] [PubMed]
  129. Nicolaides, K.H. The 11–13+6 Weeks Scan; Fetal Medicine Foundation: London, UK, 2004. [Google Scholar]
  130. Salomon, L.J.; Alfirevic, Z.; Bilardo, C.M.; Chalouhi, G.E.; Ghi, T.; Kagan, K.O.; Lau, T.K.; Papageorghiou, A.T.; Raine-Fenning, N.J.; Stirnemann, J.; et al. ISUOG Practice Guidelines: Performance of first-trimester fetal ultrasound scan. Ultrasound Obstet. Gynecol. 2013, 41, 102–113. [Google Scholar] [PubMed]
  131. Snijders, R.; Noble, P.; Sebire, N.; Souka, A.; Nicolaides, K. UK multicentre project on assessment of risk of trisomy 21 by maternal age and fetal nuchal-translucency thickness at 10–14 weeks of gestation. Lancet 1998, 352, 343–346. [Google Scholar] [CrossRef] [PubMed]
  132. Walker, M.C.; Willner, I.; Miguel, O.X.; Murphy, M.S.Q.; El-Chaâr, D.; Moretti, F.; Harvey, A.L.J.D.; White, R.R.; Muldoon, K.A.; Carrington, A.M.; et al. Using deep-learning in fetal ultrasound analysis for diagnosis of cystic hygroma in the first trimester. PLoS ONE 2022, 17, e0269323. [Google Scholar] [CrossRef] [PubMed]
  133. Zhang, L.; Dong, D.; Sun, Y.; Hu, C.; Sun, C.; Wu, Q.; Tian, J. Development and Validation of a Deep Learning Model to Screen for Trisomy 21 During the First Trimester from Nuchal Ultrasonographic Images. JAMA Netw. Open 2022, 5, e2217854. [Google Scholar] [CrossRef]
  134. Sciortino, G.; Tegolo, D.; Valenti, C. Automatic detection and measurement of nuchal translucency. Comput. Biol. Med. 2017, 82, 12–20. [Google Scholar] [CrossRef]
  135. Deng, Y.; Wang, Y.; Chen, P.; Yu, J. A hierarchical model for automatic nuchal translucency detection from ultrasound images. Comput. Biol. Med. 2012, 42, 706–713. [Google Scholar] [CrossRef]
  136. Tsai, P.-Y.; Hung, C.-H.; Chen, C.-Y.; Sun, Y.-N. Automatic Fetal Middle Sagittal Plane Detection in Ultrasound Using Generative Adversarial Network. Diagnostics 2020, 11, 21. [Google Scholar] [CrossRef]
  137. Ryou, H.; Yaqub, M.; Cavallaro, A.; Papageorghiou, A.T.; Noble, J.A. Automated 3D ultrasound image analysis for first trimester assessment of fetal health. Phys. Med. Biol. 2019, 64, 185010. [Google Scholar] [CrossRef] [PubMed]
  138. Yang, X.; Yu, L.; Li, S.; Wen, H.; Luo, D.; Bian, C.; Qin, J.; Ni, D.; Heng, P.-A. Towards Automated Semantic Segmentation in Prenatal Volumetric Ultrasound. IEEE Trans. Med Imaging 2019, 38, 180–193. [Google Scholar] [CrossRef] [PubMed]
  139. Salomon, L.J.; Alfirevic, Z.; Berghella, V.; Bilardo, C.; Hernandez-Andrade, E.; Johnsen, S.L.; Kalache, K.; Leung, K.; Malinger, G.; Munoz, H.; et al. Practice guidelines for performance of the routine mid-trimester fetal ultrasound scan. Ultrasound Obstet. Gynecol. 2010, 37, 116–126. [Google Scholar] [CrossRef] [PubMed]
  140. Matthew, J.; Skelton, E.; Day, T.G.; Zimmer, V.A.; Gomez, A.; Wheeler, G.; Toussaint, N.; Liu, T.; Budd, S.; Lloyd, K.; et al. Exploring a new paradigm for the fetal anomaly ultrasound scan: Artificial intelligence in real time. Prenat. Diagn. 2022, 42, 49–59. [Google Scholar] [CrossRef] [PubMed]
  141. Cengizler, Ç.; Ün, M.K.; Büyükkurt, S. A Nature-Inspired Search Space Reduction Technique for Spine Identification on Ultrasound Samples of Spina Bifida Cases. Sci. Rep. 2020, 10, 9280. [Google Scholar] [CrossRef] [PubMed]
  142. Meenakshi, S.; Suganthi, M.; Sureshkumar, P. Segmentation and Boundary Detection of Fetal Kidney Images in Second and Third Trimesters Using Kernel-Based Fuzzy Clustering. J. Med Syst. 2019, 43, 203. [Google Scholar] [CrossRef] [PubMed]
  143. Shozu, K.; Komatsu, M.; Sakai, A.; Komatsu, R.; Dozen, A.; Machino, H.; Yasutomi, S.; Arakaki, T.; Asada, K.; Kaneko, S.; et al. Model-Agnostic Method for Thoracic Wall Segmentation in Fetal Ultrasound Videos. Biomolecules 2020, 10, 1691. [Google Scholar] [CrossRef] [PubMed]
  144. Lee, C.; Willis, A.; Chen, C.; Sieniek, M.; Watters, A.; Stetson, B.; Uddin, A.; Wong, J.; Pilgrim, R.; Chou, K.; et al. Development of a Machine Learning Model for Sonographic Assessment of Gestational Age. JAMA Netw. Open 2023, 6, e2248685. [Google Scholar] [CrossRef]
  145. Gomes, R.G.; Vwalika, B.; Lee, C.; Willis, A.; Sieniek, M.; Price, J.T.; Chen, C.; Kasaro, M.P.; Taylor, J.A.; Stringer, E.M.; et al. A mobile-optimized artificial intelligence system for gestational age and fetal malpresentation assessment. Commun. Med. 2022, 2, 128. [Google Scholar] [CrossRef]
  146. Beksaç, M.S.; Odçikin, Z.; Egemen, A.; Karakaş, U. An intelligent diagnostic system for the assessment of gestational age based on ultrasonic fetal head measurements. Technol. Health Care 1996, 4, 223–231. [Google Scholar] [CrossRef]
  147. Namburete, A.I.; Stebbing, R.V.; Kemp, B.; Yaqub, M.; Papageorghiou, A.T.; Noble, J.A. Learning-based prediction of gestational age from ultrasound images of the fetal brain. Med. Image Anal. 2015, 21, 72–86. [Google Scholar] [CrossRef] [PubMed]
  148. Alzubaidi, M.; Agus, M.; Shah, U.; Makhlouf, M.; Alyafei, K.; Househ, M. Ensemble Transfer Learning for Fetal Head Analysis: From Segmentation to Gestational Age and Weight Prediction. Diagnostics 2022, 12, 2229. [Google Scholar] [CrossRef] [PubMed]
  149. Dan, T.; Chen, X.; He, M.; Guo, H.; He, X.; Chen, J.; Xian, J.; Hu, Y.; Zhang, B.; Wang, N.; et al. DeepGA for automatically estimating fetal gestational age through ultrasound imaging. Artif. Intell. Med. 2023, 135, 102453. [Google Scholar] [CrossRef] [PubMed]
  150. Lee, L.H.; Bradburn, E.; Craik, R.; Yaqub, M.; Norris, S.A.; Ismail, L.C.; Ohuma, E.O.; Barros, F.C.; Lambert, A.; Carvalho, M.; et al. Machine learning for accurate estimation of fetal gestational age based on ultrasound images. NPJ Digit. Med. 2023, 6, 36. [Google Scholar] [CrossRef] [PubMed]
  151. Maraci, M.A.; Yaqub, M.; Craik, R.; Beriwal, S.; Self, A.; Von Dadelszen, P.; Papageorghiou, A.; Noble, J.A. Toward point-of-care ultrasound estimation of fetal gestational age from the trans-cerebellar diameter using CNN-based ultrasound image analysis. J. Med. Imaging 2020, 7, 1. [Google Scholar] [CrossRef] [PubMed]
  152. Pokaprakarn, T.; Prieto, J.C.; Price, J.T.; Kasaro, M.P.; Sindano, N.; Shah, H.R.; Peterson, M.; Akapelwa, M.M.; Kapilya, F.M.; Sebastião, Y.V.; et al. AI Estimation of Gestational Age from Blind Ultrasound Sweeps in Low-Resource Settings. NEJM Évid. 2022, 1, EVIDoa2100058. [Google Scholar] [CrossRef] [PubMed]
  153. Prieto, J.C.; Shah, H.; Rosenbaum, A.; Jiang, X.; Musonda, P.; Price, J.; Stringer, E.M.; Vwalika, B.; Stamilio, D.M.; Stringer, J.S.A. An automated framework for image classification and segmentation of fetal ultrasound images for gestational age estimation. In Medical Imaging 2021: Image Processing; Landman, B.A., Išgum, I., Eds.; SPIE: Bellingham, WA, USA, 2021; p. 55. Available online: https://www.spiedigitallibrary.org/conference-proceedings-of-spie/11596/2582243/An-automated-framework-for-image-classification-and-segmentation-of-fetal/10.1117/12.2582243.full (accessed on 16 October 2023).
  154. Drukker, L.; Sharma, H.; Droste, R.; Alsharid, M.; Chatelain, P.; Noble, J.A.; Papageorghiou, A.T. Transforming obstetric ultrasound into data science using eye tracking, voice recording, transducer motion and ultrasound video. Sci. Rep. 2021, 11, 14109. [Google Scholar] [CrossRef]
  155. Sharma, H.; Drukker, L.; Chatelain, P.; Droste, R.; Papageorghiou, A.T.; Noble, J.A. Knowledge representation and learning of operator clinical workflow from full-length routine fetal ultrasound scan videos. Med. Image Anal. 2021, 69, 101973. [Google Scholar] [CrossRef]
  156. Wang, Y.; Yang, Q.; Drukker, L.; Papageorghiou, A.; Hu, Y.; Noble, J.A. Task model-specific operator skill assessment in routine fetal ultrasound scanning. Int. J. CARS 2022, 17, 1437–1444. [Google Scholar] [CrossRef]
  157. Zhao, C.; Droste, R.; Drukker, L.; Papageorghiou, A.T.; Noble, J.A. Visual-Assisted Probe Movement Guidance for Obstetric Ultrasound Scanning Using Landmark Retrieval. In Medical Image Computing and Computer Assisted Intervention—MICCAI 2021, Proceedings of the 24th International Conference, Strasbourg, France, 27 September–1 October 2021; De Bruijne, M., Cattin, P.C., Cotin, S., Padoy, N., Speidel, S., Zheng, Y., Eds.; Lecture Notes in Computer Science; Springer International Publishing: Cham, Switzerland, 2021; Volume 12908, pp. 670–679. Available online: https://link.springer.com/10.1007/978-3-030-87237-3_64 (accessed on 9 June 2023).
  158. Drukker, L.; Sharma, H.; Karim, J.N.; Droste, R.; Noble, J.A.; Papageorghiou, A.T. Clinical workflow of sonographers performing fetal anomaly ultrasound scans: Deep-learning-based analysis. Ultrasound Obstet. Gynecol. 2022, 60, 759–765. [Google Scholar] [CrossRef]
  159. Alsharid, M.; Cai, Y.; Sharma, H.; Drukker, L.; Papageorghiou, A.T.; Noble, J.A. Gaze-assisted automatic captioning of fetal ultrasound videos using three-way multi-modal deep neural networks. Med. Image Anal. 2022, 82, 102630. [Google Scholar] [CrossRef] [PubMed]
  160. Sharma, H.; Drukker, L.; Papageorghiou, A.T.; Noble, J.A. Multi-Modal Learning from Video, Eye Tracking, and Pupillometry for Operator Skill Characterization in Clinical Fetal Ultrasound. In Proceedings of the 2021 IEEE 18th International Symposium on Biomedical Imaging (ISBI), Nice, France, 13–16 April 2021; pp. 1646–1649. Available online: https://ieeexplore.ieee.org/document/9433863/ (accessed on 9 June 2023).
  161. Xia, T.-H.; Tan, M.; Li, J.-H.; Wang, J.-J.; Wu, Q.-Q.; Kong, D.-X. Establish a normal fetal lung gestational age grading model and explore the potential value of deep learning algorithms in fetal lung maturity evaluation. Chin. Med. J. 2021, 134, 1828–1837. [Google Scholar] [CrossRef] [PubMed]
  162. Du, Y.; Fang, Z.; Jiao, J.; Xi, G.; Zhu, C.; Ren, Y.; Guo, Y.; Wang, Y. Application of ultrasound-based radiomics technology in fetal-lung-texture analysis in pregnancies complicated by gestational diabetes and/or pre-eclampsia. Ultrasound Obstet. Gynecol. 2021, 57, 804–812. [Google Scholar] [CrossRef] [PubMed]
  163. Chen, P.; Chen, Y.; Deng, Y.; Wang, Y.; He, P.; Lv, X.; Yu, J. A preliminary study to quantitatively evaluate the development of maturation degree for fetal lung based on transfer learning deep model from ultrasound images. Int. J. Comput. Assist. Radiol. Surg. 2020, 15, 1407–1415. [Google Scholar] [CrossRef] [PubMed]
  164. Du, Y.; Jiao, J.; Ji, C.; Li, M.; Guo, Y.; Wang, Y.; Zhou, J.; Ren, Y. Ultrasound-based radiomics technology in fetal lung texture analysis prediction of neonatal respiratory morbidity. Sci. Rep. 2022, 12, 12747. [Google Scholar] [CrossRef] [PubMed]
  165. Bonet-Carne, E.; Palacio, M.; Cobo, T.; Perez-Moreno, A.; Lopez, M.; Piraquive, J.P.; Ramirez, J.C.; Botet, F.; Marques, F.; Gratacos, E. Quantitative ultrasound texture analysis of fetal lungs to predict neonatal respiratory morbidity. Ultrasound Obstet. Gynecol. 2015, 45, 427–433. [Google Scholar] [CrossRef] [PubMed]
  166. Wu, M.; Fraser, R.F.; Chen, C.W. A Novel Algorithm for Computer-Assisted Measurement of Cervical Length from Transvaginal Ultrasound Images. IEEE Trans. Inform. Technol. Biomed. 2004, 8, 333–342. [Google Scholar] [CrossRef]
  167. He, H.; Liu, R.; Zhou, X.; Zhang, Y.; Yu, B.; Xu, Z.; Huang, H. B-Ultrasound Image Analysis of Intrauterine Pregnancy Residues after Mid-Term Pregnancy Based on Smart Medical Big Data. J. Healthc. Eng. 2022, 2022, 9937051. [Google Scholar] [CrossRef]
  168. Wang, Q.; Liu, D.; Liu, G. Value of Ultrasonic Image Features in Diagnosis of Perinatal Outcomes of Severe Preeclampsia on account of Deep Learning Algorithm. Comput. Math. Methods Med. 2022, 2022, 4010339. [Google Scholar] [CrossRef]
  169. Liu, S.; Sun, Y.; Luo, N. Doppler Ultrasound Imaging Combined with Fetal Heart Detection in Predicting Fetal Distress in Pregnancy-Induced Hypertension under the Guidance of Artificial Intelligence Algorithm. J. Healthc. Eng. 2021, 2021, 4405189. [Google Scholar] [CrossRef]
  170. Wang, Y.; Zhang, Q.; Yin, C.; Chen, L.; Yang, Z.; Jia, S.; Sun, X.; Bai, Y.; Han, F.; Yuan, Z. Automated prediction of early spontaneous miscarriage based on the analyzing ultrasonographic gestational sac imaging by the convolutional neural network: A case-control and cohort study. BMC Pregnancy Childbirth 2022, 22, 621. [Google Scholar] [CrossRef] [PubMed]
  171. Sur, S.D.; Jayaprakasan, K.; Jones, N.W.; Clewes, J.; Winter, B.; Cash, N.; Campbell, B.; Raine-Fenning, N.J. A Novel Technique for the Semi-Automated Measurement of Embryo Volume: An Intraobserver Reliability Study. Ultrasound Med. Biol. 2010, 36, 719–725. [Google Scholar] [CrossRef] [PubMed]
  172. Ghi, T.; Conversano, F.; Zegarra, R.R.; Pisani, P.; Dall’Asta, A.; Lanzone, A.; Lau, W.; Vimercati, A.; Iliescu, D.G.; Mappa, I.; et al. Novel artificial intelligence approach for automatic differentiation of fetal occiput anterior and non-occiput anterior positions during labor. Ultrasound Obstet. Gynecol. 2021, 59, 93–99. [Google Scholar] [CrossRef] [PubMed]
  173. Lu, Y.; Zhi, D.; Zhou, M.; Lai, F.; Chen, G.; Ou, Z.; Zeng, R.; Long, S.; Qiu, R.; Zhou, M.; et al. Multitask Deep Neural Network for the Fully Automatic Measurement of the Angle of Progression. Comput. Math. Methods Med. 2022, 2022, 5192338. [Google Scholar] [CrossRef] [PubMed]
  174. Bai, J.; Sun, Z.; Yu, S.; Lu, Y.; Long, S.; Wang, H.; Qiu, R.; Ou, Z.; Zhou, M.; Zhi, D.; et al. A framework for computing angle of progression from transperineal ultrasound images for evaluating fetal head descent using a novel double branch network. Front. Physiol. 2022, 13, 940150. [Google Scholar] [CrossRef] [PubMed]
  175. Wu, L.; Cheng, J.-Z.; Li, S.; Lei, B.; Wang, T.; Ni, D. FUIQA: Fetal Ultrasound Image Quality Assessment with Deep Convolutional Networks. IEEE Trans. Cybern. 2017, 47, 1336–1349. [Google Scholar] [CrossRef] [PubMed]
  176. Meng, Q.; Housden, J.; Matthew, J.; Rueckert, D.; Schnabel, J.A.; Kainz, B.; Sinclair, M.; Zimmer, V.; Hou, B.; Rajchl, M.; et al. Weakly Supervised Estimation of Shadow Confidence Maps in Fetal Ultrasound Imaging. IEEE Trans. Med. Imaging 2019, 38, 2755–2767. [Google Scholar] [CrossRef]
  177. Gupta, L.; Sisodia, R.S.; Pallavi, V.; Firtion, C.; Ramachandran, G. Segmentation of 2D fetal ultrasound images by exploiting context information using conditional random fields. In Proceedings of the 2011 Annual International Conference of the IEEE Engineering in Medicine and Biology Society, Boston, MA, USA, 30 August–3 September 2011; IEEE: New York, NY, USA, 2011; pp. 7219–7222. [Google Scholar]
  178. Yin, P.; Wang, H. Evaluation of Nursing Effect of Pelvic Floor Rehabilitation Training on Pelvic Organ Prolapse in Postpartum Pregnant Women under Ultrasound Imaging with Artificial Intelligence Algorithm. Comput. Math. Methods Med. 2022, 2022, 1786994. [Google Scholar] [CrossRef]
  179. Cho, H.C.; Sun, S.; Hyun, C.M.; Kwon, J.-Y.; Kim, B.; Park, Y.; Seo, J.K. Automated ultrasound assessment of amniotic fluid index using deep learning. Med. Image Anal. 2021, 69, 101951. [Google Scholar] [CrossRef]
  180. Compagnone, C.; Borrini, G.; Calabrese, A.; Taddei, M.; Bellini, V.; Bignami, E. Artificial intelligence enhanced ultrasound (AI-US) in a severe obese parturient: A case report. Ultrasound J. 2022, 14, 34. [Google Scholar] [CrossRef]
  181. Rueda, S.; Knight, C.L.; Papageorghiou, A.T.; Noble, J.A. Feature-based fuzzy connectedness segmentation of ultrasound images with an object completion step. Med. Image Anal. 2015, 26, 30–46. [Google Scholar] [CrossRef]
  182. Kaplan, E.; Ekinci, T.; Kaplan, S.; Barua, P.D.; Dogan, S.; Tuncer, T.; Tan, R.S.; Arunkumar, N.; Acharya, U.R. PFP-LHCINCA: Pyramidal Fixed-Size Patch-Based Feature Extraction and Chi-Square Iterative Neighborhood Component Analysis for Automated Fetal Sex Classification on Ultrasound Images. Contrast Media Mol. Imaging 2022, 2022, 6034971. [Google Scholar] [CrossRef] [PubMed]
  183. Timmerman, D.; Testa, A.C.; Bourne, T.; Ameye, L.; Jurkovic, D.; Van Holsbeke, C.; Paladini, D.; Van Calster, B.; Vergote, I.; Van Huffel, S.; et al. Simple ultrasound-based rules for the diagnosis of ovarian cancer. Ultrasound Obstet. Gynecol. 2008, 31, 681–690. [Google Scholar] [CrossRef] [PubMed]
  184. Amor, F.; Vaccaro, H.; Alcázar, J.L.; León, M.; Craig, J.M.; Martinez, J. Gynecologic Imaging Reporting and Data System: A New Proposal for Classifying Adnexal Masses on the Basis of Sonographic Findings. J. Ultrasound Med. 2009, 28, 285–291. [Google Scholar] [CrossRef] [PubMed]
  185. Hsu, S.-T.; Su, Y.-J.; Hung, C.-H.; Chen, M.-J.; Lu, C.-H.; Kuo, C.-E. Automatic ovarian tumors recognition system based on ensemble convolutional neural network with ultrasound imaging. BMC Med. Inform. Decis. Mak. 2022, 22, 298. [Google Scholar] [CrossRef] [PubMed]
  186. Al-Karawi, D.; Al-Assam, H.; Du, H.; Sayasneh, A.; Landolfo, C.; Timmerman, D.; Bourne, T.; Jassim, S. An Evaluation of the Effectiveness of Image-based Texture Features Extracted from Static B-mode Ultrasound Images in Distinguishing between Benign and Malignant Ovarian Masses. Ultrason. Imaging 2021, 43, 124–138. [Google Scholar] [CrossRef] [PubMed]
  187. Aramendía-Vidaurreta, V.; Cabeza, R.; Villanueva, A.; Navallas, J.; Alcázar, J.L. Ultrasound Image Discrimination between Benign and Malignant Adnexal Masses Based on a Neural Network Approach. Ultrasound Med. Biol. 2016, 42, 742–752. [Google Scholar] [CrossRef]
  188. Christiansen, F.; Epstein, E.L.; Smedberg, E.; Åkerlund, M.; Smith, K. Ultrasound image analysis using deep neural networks for discriminating between benign and malignant ovarian tumors: Comparison with expert subjective assessment. Ultrasound Obstet. Gynecol. 2021, 57, 155–163. [Google Scholar] [CrossRef]
  189. Gao, Y.; Zeng, S.; Xu, X.; Li, H.; Yao, S.; Song, K.; Li, X.; Chen, L.; Tang, J.; Xing, H.; et al. Deep learning-enabled pelvic ultrasound images for accurate diagnosis of ovarian cancer in China: A retrospective, multicentre, diagnostic study. Lancet Digit. Health 2022, 4, e179–e187. [Google Scholar] [CrossRef]
  190. Jung, Y.; Kim, T.; Han, M.R.; Kim, S.; Kim, G.; Lee, S.; Choi, Y.J. Ovarian tumor diagnosis using deep convolutional neural networks and a denoising convolutional autoencoder. Sci Rep. 2022, 12, 17024. [Google Scholar] [CrossRef]
  191. Martínez-Más, J.; Bueno-Crespo, A.; Khazendar, S.; Remezal-Solano, M.; Martínez-Cendán, J.-P.; Jassim, S.; Du, H.; Al Assam, H.; Bourne, T.; Timmerman, D. Evaluation of machine learning methods with Fourier Transform features for classifying ovarian tumors based on ultrasound images. PLoS ONE 2019, 14, e0219388. [Google Scholar] [CrossRef] [PubMed]
  192. Chen, H.; Yang, B.-W.; Qian, L.; Meng, Y.-S.; Bai, X.-H.; Hong, X.-W.; He, X.; Jiang, M.-J.; Yuan, F.; Du, Q.-W.; et al. Deep Learning Prediction of Ovarian Malignancy at US Compared with O-RADS and Expert Assessment. Radiology 2022, 304, 106–113. [Google Scholar] [CrossRef] [PubMed]
  193. Nero, C.; Ciccarone, F.; Boldrini, L.; Lenkowicz, J.; Paris, I.; Capoluongo, E.D.; Testa, A.C.; Fagotti, A.; Valentini, V.; Scambia, G. Germline BRCA 1-2 status prediction through ovarian ultrasound images radiogenomics: A hypothesis generating study (PROBE study). Sci. Rep. 2020, 10, 16511. [Google Scholar] [CrossRef] [PubMed]
  194. Chen, L.; Qiao, C.; Wu, M.; Cai, L.; Yin, C.; Yang, M.; Sang, X.; Bai, W. Improving the Segmentation Accuracy of Ovarian-Tumor Ultrasound Images Using Image Inpainting. Bioengineering 2023, 10, 184. [Google Scholar] [CrossRef] [PubMed]
  195. Sung, H.; Ferlay, J.; Siegel, R.L.; Laversanne, M.; Soerjomataram, I.; Jemal, A.; Bray, F. Global Cancer Statistics 2020: GLOBOCAN Estimates of Incidence and Mortality Worldwide for 36 Cancers in 185 Countries. CA Cancer J. Clin. 2021, 71, 209–249. [Google Scholar] [CrossRef] [PubMed]
  196. Ren, W.; Chen, M.; Qiao, Y.; Zhao, F. Global guidelines for breast cancer screening: A systematic review. Breast 2022, 64, 85–99. [Google Scholar] [CrossRef] [PubMed]
  197. Berg, W.A.; Aldrete, A.-L.L.; Jairaj, A.; Parea, J.C.L.; García, C.Y.; McClennan, R.C.; Cen, S.Y.; Larsen, L.H.; de Lara, M.T.S.; Love, S. Toward AI-supported US Triage of Women with Palpable Breast Lumps in a Low-Resource Setting. Radiology 2023, 307, e223351. [Google Scholar] [CrossRef]
  198. Huang, X.; Qiu, Y.; Bao, F.; Wang, J.; Lin, C.; Lin, Y.; Wu, J.; Yang, H. Artificial intelligence breast ultrasound and handheld ultrasound in the BI-RADS categorization of breast lesions: A pilot head to head comparison study in screening program. Front. Public Health 2023, 10, 1098639. [Google Scholar] [CrossRef]
  199. Browne, J.L.; Pascual, M.Á.; Perez, J.; Salazar, S.; Valero, B.; Rodriguez, I.; Cassina, D.; Alcazar, J.L.; Guerriero, S.; Graupera, B. AI: Can It Make a Difference to the Predictive Value of Ultrasound Breast Biopsy? Diagnostics 2023, 13, 811. [Google Scholar] [CrossRef]
  200. Pfob, A.; Sidey-Gibbons, C.; Barr, R.G.; Duda, V.; Alwafai, Z.; Balleyguier, C.; Clevert, D.-A.; Fastner, S.; Gomez, C.; Goncalo, M.; et al. Intelligent multi-modal shear wave elastography to reduce unnecessary biopsies in breast cancer diagnosis (INSPiRED 002): A retrospective, international, multicentre analysis. Eur. J. Cancer 2022, 177, 1–14. [Google Scholar] [CrossRef]
  201. Dong, F.; She, R.; Cui, C.; Shi, S.; Hu, X.; Zeng, J.; Wu, H.; Xu, J.; Zhang, Y. One step further into the blackbox: A pilot study of how to build more confidence around an AI-based decision system of breast nodule assessment in 2D ultrasound. Eur. Radiol. 2021, 31, 4991–5000. [Google Scholar] [CrossRef] [PubMed]
  202. Pfob, A.; Sidey-Gibbons, C.; Barr, R.G.; Duda, V.; Alwafai, Z.; Balleyguier, C.; Clevert, D.-A.; Fastner, S.; Gomez, C.; Goncalo, M.; et al. The importance of multi-modal imaging and clinical information for humans and AI-based algorithms to classify breast masses (INSPiRED 003): An international, multicenter analysis. Eur. Radiol. 2022, 32, 4101–4115. [Google Scholar] [CrossRef] [PubMed]
  203. Heremans, R.; Bosch, T.V.D.; Valentin, L.; Wynants, L.; Pascual, M.A.; Fruscio, R.; Testa, A.C.; Buonomo, F.; Guerriero, S.; Epstein, E.; et al. Ultrasound features of endometrial pathology in women without abnormal uterine bleeding: Results from the International Endometrial Tumor Analysis study (IETA3). Ultrasound Obstet. Gynecol. 2022, 60, 243–255. [Google Scholar] [CrossRef] [PubMed]
  204. Vitale, S.G.; Riemma, G.; Haimovich, S.; Carugno, J.; Pacheco, L.A.; Perez-Medina, T.; Parry, J.P.; Török, P.; Tesarik, J.; Della Corte, L.; et al. Risk of endometrial cancer in asymptomatic postmenopausal women in relation to ultrasonographic endometrial thickness: Systematic review and diagnostic test accuracy meta-analysis. Am. J. Obstet. Gynecol. 2022, 228, 22–35.e2. [Google Scholar] [CrossRef] [PubMed]
  205. Zhao, X.; Wu, S.; Zhang, B.; Burjoo, A.; Yang, Y.; Xu, D. Artificial intelligence diagnosis of intrauterine adhesion by 3D ultrasound imaging: A prospective study. Quant. Imaging Med. Surg. 2023, 13, 2314–2327. [Google Scholar] [CrossRef] [PubMed]
  206. Wang, X.; Bao, N.; Xin, X.; Tan, J.; Li, H.; Zhou, S.; Liu, H. Automatic evaluation of endometrial receptivity in three-dimensional transvaginal ultrasound images based on 3D U-Net segmentation. Quant. Imaging Med. Surg. 2022, 12, 4095–4108. [Google Scholar] [CrossRef] [PubMed]
  207. Park, H.; Lee, H.J.; Kim, H.G.; Ro, Y.M.; Shin, D.; Lee, S.R.; Kim, S.H.; Kong, M. Endometrium segmentation on transvaginal ultrasound image using key-point discriminator. Med. Phys. 2019, 46, 3974–3984. [Google Scholar] [CrossRef]
  208. Liu, Y.; Zhou, Q.; Peng, B.; Jiang, J.; Fang, L.; Weng, W.; Wang, W.; Wang, S.; Zhu, X. Automatic Measurement of Endometrial Thickness from Transvaginal Ultrasound Images. Front. Bioeng. Biotechnol. 2022, 10, 853845. [Google Scholar] [CrossRef]
  209. Moro, F.; Albanese, M.; Boldrini, L.; Chiappa, V.; Lenkowicz, J.; Bertolina, F.; Mascilini, F.; Moroni, R.; Gambacorta, M.A.; Raspagliesi, F.; et al. Developing and validating ultrasound-based radiomics models for predicting high-risk endometrial cancer. Ultrasound Obstet. Gynecol. 2022, 60, 256–268. [Google Scholar] [CrossRef]
  210. Zhu, Y.; Zhang, J.; Ji, Z.; Liu, W.; Li, M.; Xia, E.; Zhang, J.; Wang, J. Ultrasound Evaluation of Pelvic Floor Function after Transumbilical Laparoscopic Single-Site Total Hysterectomy Using Deep Learning Algorithm. Comput. Math. Methods Med. 2022, 2022, 1116332. [Google Scholar] [CrossRef]
  211. Williams, H.; Cattani, L.; Van Schoubroeck, D.; Yaqub, M.; Sudre, C.; Vercauteren, T.; D’Hooge, J.; Deprest, J. Automatic Extraction of Hiatal Dimensions in 3-D Transperineal Pelvic Ultrasound Recordings. Ultrasound Med. Biol. 2021, 47, 3470–3479. [Google Scholar] [CrossRef] [PubMed]
  212. Szentimrey, Z.; Ameri, G.; Hong, C.X.; Cheung, R.Y.K.; Ukwatta, E.; Eltahawi, A. Automated segmentation and measurement of the female pelvic floor from the mid-sagittal plane of 3D ultrasound volumes. Med. Phys. 2023, 50, 6215–6227. [Google Scholar] [CrossRef] [PubMed]
  213. Van Den Noort, F.; Manzini, C.; Van Der Vaart, C.H.; Van Limbeek, M.A.J.; Slump, C.H.; Grob, A.T.M. Automatic identification and segmentation of slice of minimal hiatal dimensions in transperineal ultrasound volumes. Ultrasound Obstet. Gynecol. 2022, 60, 570–576. [Google Scholar] [CrossRef] [PubMed]
  214. Van den Noort, F.; van der Vaart, C.H.; Grob, A.T.M.; van de Waarsenburg, M.K.; Slump, C.H.; van Stralen, M. Deep learning enables automatic quantitative assessment of puborectalis muscle and urogenital hiatus in plane of minimal hiatal dimensions. Ultrasound Obstet. Gynecol. 2019, 54, 270–275. [Google Scholar] [CrossRef] [PubMed]
  215. Wu, S.; Ren, Y.; Lin, X.; Huang, Z.; Zheng, Z.; Zhang, X. Development and validation of a composite AI model for the diagnosis of levator ani muscle avulsion. Eur. Radiol. 2022, 32, 5898–5906. [Google Scholar] [CrossRef] [PubMed]
  216. Maicas, G.; Leonardi, M.; Avery, J.; Panuccio, C.; Carneiro, G.; Hull, M.L.; Condous, G. Deep learning to diagnose pouch of Douglas obliteration with ultrasound sliding sign. Reprod. Fertil. 2021, 2, 236–243. [Google Scholar] [CrossRef] [PubMed]
  217. Raimondo, D.; Raffone, A.; Aru, A.C.; Giorgi, M.; Giaquinto, I.; Spagnolo, E.; Travaglino, A.; Galatolo, F.A.; Cimino, M.G.C.A.; Lenzi, J.; et al. Application of Deep Learning Model in the Sonographic Diagnosis of Uterine Adenomyosis. Int. J. Environ. Res. Public Health 2023, 20, 1724. [Google Scholar] [CrossRef]
  218. Zhang, Y.; Hou, J.; Wang, Q.; Hou, A.; Liu, Y. Application of Transfer Learning and Feature Fusion Algorithms to Improve the Identification and Prediction Efficiency of Premature Ovarian Failure. J. Healthc. Eng. 2022, 2022, 3269692. [Google Scholar] [CrossRef]
  219. Yu, L.; Qing, X. Diagnosis of Idiopathic Premature Ovarian Failure by Color Doppler Ultrasound under the Intelligent Segmentation Algorithm. Comput. Math. Methods Med. 2022, 2022, 2645607. [Google Scholar] [CrossRef]
  220. Huo, T.; Li, L.; Chen, X.; Wang, Z.; Zhang, X.; Liu, S.; Huang, J.; Zhang, J.; Yang, Q.; Wu, W.; et al. Artificial intelligence-aided method to detect uterine fibroids in ultrasound images: A retrospective study. Sci. Rep. 2023, 13, 3714. [Google Scholar] [CrossRef]
  221. Yang, T.; Yuan, L.; Li, P.; Liu, P. Real-Time Automatic Assisted Detection of Uterine Fibroid in Ultrasound Images Using a Deep Learning Detector. Ultrasound Med. Biol. 2023, 49, 1616–1626. [Google Scholar] [CrossRef] [PubMed]
  222. Singh, V.K.; Yousef Kalafi, E.; Cheah, E.; Wang, S.; Wang, J.; Ozturk, A.; Li, Q.; Eldar, Y.C.; Samir, A.E.; Kumar, V. HaTU-Net: Harmonic Attention Network for Automated Ovarian Ultrasound Quantification in Assisted Pregnancy. Diagnostics 2022, 12, 3213. [Google Scholar] [CrossRef] [PubMed]
  223. Noor, N.; Vignarajan, C.; Malhotra, N.; Vanamail, P. Three-Dimensional Automated Volume Calculation (Sonography-Based Automated Volume Count) versus Two-Dimensional Manual Ultrasonography for Follicular Tracking and Oocyte Retrieval in Women Undergoing in vitro Fertilization-Embryo Transfer: A Randomized Controlled Trial. J. Hum. Reprod. Sci. 2020, 13, 296. [Google Scholar] [PubMed]
  224. Maurice, P.; Dhombres, F.; Blondiaux, E.; Friszer, S.; Guilbaud, L.; Lelong, N.; Khoshnood, B.; Charlet, J.; Perrot, N.; Jauniaux, E.; et al. Towards ontology-based decision support systems for complex ultrasound diagnosis in obstetrics and gynecology. J. Gynecol. Obstet. Hum. Reprod. 2017, 46, 423–429. [Google Scholar] [CrossRef] [PubMed]
  225. Dhombres, F.; Maurice, P.; Friszer, S.; Guilbaud, L.; Lelong, N.; Khoshnood, B.; Charlet, J.; Perrot, N.; Jauniaux, E.; Jurkovic, D.; et al. Developing a knowledge base to support the annotation of ultrasound images of ectopic pregnancy. J. Biomed. Semant. 2017, 8, 4. [Google Scholar] [CrossRef] [PubMed]
  226. Huh, J.; Khan, S.; Choi, S.; Shin, D.; Lee, J.E.; Lee, E.S.; Ye, J.C. Tunable image quality control of 3-D ultrasound using switchable CycleGAN. Med. Image Anal. 2023, 83, 102651. [Google Scholar] [CrossRef] [PubMed]
  227. Kalantaridou, S.N.; Nelson, L.M. Premature ovarian failure is not premature menopause. Ann. N. Y. Acad. Sci. 2006, 900, 393–402. [Google Scholar] [CrossRef]
  228. Watzenboeck, M.L.; Heidinger, B.H.; Rainer, J.; Schmidbauer, V.; Ulm, B.; Rubesova, E.; Prayer, D.; Kasprian, G.; Prayer, F. Reproducibility of 2D versus 3D radiomics for quantitative assessment of fetal lung development: A retrospective fetal MRI study. Insights Imaging 2023, 14, 31. [Google Scholar] [CrossRef]
  229. Prayer, F.; Watzenböck, M.L.; Heidinger, B.H.; Rainer, J.; Schmidbauer, V.; Prosch, H.; Ulm, B.; Rubesova, E.; Prayer, D.; Kasprian, G. Fetal MRI radiomics: Non-invasive and reproducible quantification of human lung maturity. Eur. Radiol. 2023, 33, 4205–4213. [Google Scholar] [CrossRef]
  230. Liu, Y.; Zhang, Y.; Cheng, R.; Liu, S.; Qu, F.; Yin, X.; Wang, Q.; Xiao, B.; Ye, Z. Radiomics analysis of apparent diffusion coefficient in cervical cancer: A preliminary study on histological grade evaluation: Radiomic Features in Uterine Cervical Cancer. J. Magn. Reson. Imaging 2019, 49, 280–290. [Google Scholar] [CrossRef]
  231. Drukker, L.; Noble, J.A.; Papageorghiou, A.T. Introduction to Artificial Intelligence in Ultrasound Imaging in Obstetrics and Gynecology. Obstet. Gynecol. Surv. 2021, 76, 127–129. [Google Scholar] [CrossRef]
  232. Jani, J.; Peralta, C.F.A.; Benachi, A.; Deprest, J.; Nicolaides, K.H. Assessment of lung area in fetuses with congenital diaphragmatic hernia. Ultrasound Obstet. Gynecol. 2007, 30, 72–76. [Google Scholar] [CrossRef]
  233. World Health Organization. Ethics and Governance of Artificial Intelligence for Health: WHO Guidance; Licence: CC BY-NC-SA 3.0 IGO; WHO: Switzerland, Geneva, 2021. [Google Scholar]
  234. Dhombres, F.; Bonnard, J.; Bailly, K.; Maurice, P.; Papageorghiou, A.T.; Jouannic, J.-M. Contributions of Artificial Intelligence Reported in Obstetrics and Gynecology Journals: Systematic Review. J. Med. Internet Res. 2022, 24, e35465. [Google Scholar] [CrossRef]
Figure 1. PRISMA flow diagram for the screening process of reports included in this review.
Figure 2. Overview of the distribution of research topics in the analyzed literature (a total of 148 articles) for AI applications in US imaging in the subspecialty of obstetrics. Figure adapted from Servier Medical Art.
Figure 3. Overview of the distribution of research topics in the analyzed literature (a total of 41 articles) for AI applications in US imaging in the subspecialty of gynecology. Figure adapted from Servier Medical Art.
Table 1. PICOS search tool headings for literature evaluation [12].
Participants
  Examiner: Healthcare professionals in OB/GYN or radiology, AI specialists
  Patients: Healthy pregnant and non-pregnant women or women with any gynecological or obstetric disease/complication, OB/GYN training models
Intervention or Exposure
  AI-assisted US applications
Comparison
  Comparison of AI US algorithms to human US examiners or another AI algorithm
Outcome
  Fields of AI applications in OB/GYN US imaging, benefits and limitations of AI usage, future aspects for emerging fields of applications
Study type
  Published literature of any design, excluding trial protocols and reviews
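To illustrate how PICOS headings such as those in Table 1 can drive a reproducible first-pass abstract screen, the following minimal Python sketch encodes them as a data structure and applies a simple keyword filter. It is purely illustrative: the keyword lists and the passes_screen helper are hypothetical placeholders, not the search strategy used in this review, and it assumes the Comparison and Outcome headings are judged at the full-text stage rather than by keywords.

# Minimal sketch (illustrative assumptions only): PICOS-style headings
# encoded as a data structure for a keyword-based first-pass abstract screen.
from dataclasses import dataclass

@dataclass(frozen=True)
class PicosScreen:
    # Participants: OB/GYN populations (hypothetical keyword stems)
    participants: tuple = ("pregnan", "fetal", "fetus", "obstetric", "gynecolog")
    # Intervention or Exposure: AI-assisted applications...
    intervention: tuple = ("artificial intelligence", "deep learning",
                           "machine learning", "neural network")
    # ...applied to US imaging
    modality: tuple = ("ultrasound", "sonograph")
    # Study type: exclude trial protocols and reviews
    excluded_types: tuple = ("review", "trial protocol")
    # Comparison and Outcome are assumed to be assessed at full-text stage.

def passes_screen(abstract: str, c: PicosScreen) -> bool:
    """Return True if an abstract passes the keyword-based PICOS screen."""
    t = abstract.lower()
    return (any(k in t for k in c.participants)
            and any(k in t for k in c.intervention)
            and any(k in t for k in c.modality)
            and not any(k in t for k in c.excluded_types))

example = ("A deep learning model for automated fetal biometry "
           "measurement from obstetric ultrasound images.")
print(passes_screen(example, PicosScreen()))  # expected output: True

In practice such a filter would only pre-sort records; borderline abstracts would still go to two independent human reviewers, as the PRISMA workflow in Figure 1 implies.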