Review

Progress in the Application of Artificial Intelligence in Ultrasound-Assisted Medical Diagnosis

by Li Yan 1, Qing Li 2, Kang Fu 1, Xiaodong Zhou 2,* and Kai Zhang 3,*

1 Institute of Medical Research, Northwestern Polytechnical University, Xi’an 710072, China
2 Ultrasound Diagnosis & Treatment Center, Xi’an International Medical Center Hospital, Xi’an 710100, China
3 Department of Dermatology and Aesthetic Plastic Surgery, Xi’an No. 3 Hospital, The Affiliated Hospital of Northwest University, Xi’an 718000, China
* Authors to whom correspondence should be addressed.
Bioengineering 2025, 12(3), 288; https://doi.org/10.3390/bioengineering12030288
Submission received: 1 February 2025 / Revised: 7 March 2025 / Accepted: 12 March 2025 / Published: 13 March 2025

Abstract:
The integration of artificial intelligence (AI) into ultrasound medicine has revolutionized medical imaging, enhancing diagnostic accuracy and clinical workflows. This review focuses on the applications, challenges, and future directions of AI technologies, particularly machine learning (ML) and its subset, deep learning (DL), in ultrasound diagnostics. By leveraging advanced algorithms such as convolutional neural networks (CNNs), AI has significantly improved image acquisition, quality assessment, and objective disease diagnosis. AI-driven solutions now facilitate automated image analysis, intelligent diagnostic assistance, and medical education, enabling precise lesion detection across various organs while reducing physician workload. AI’s error detection capabilities further enhance diagnostic accuracy. Looking ahead, the integration of AI with ultrasound is expected to deepen, promoting trends in standardization, personalized treatment, and intelligent healthcare, particularly in underserved areas. Despite its potential, comprehensive assessments of AI’s diagnostic accuracy and ethical implications remain limited, necessitating rigorous evaluations to ensure effectiveness in clinical practice. This review provides a systematic evaluation of AI technologies in ultrasound medicine, highlighting their transformative potential to improve global healthcare outcomes.

1. Introduction

The rapid advancement of medical imaging technology, coupled with the increasing volume of clinical imaging data and the growing demand for improved patient consultation efficiency and satisfaction, has rendered timely and accurate diagnosis, classification, and prognostic evaluation of diseases focal points of contemporary research. In recent years, the swift development of artificial intelligence (AI) technology has facilitated its widespread and profound application in the healthcare sector. Notably, in the field of ultrasound medicine, the introduction of key technologies such as machine learning (ML) and medical big data has revolutionized traditional diagnostic methods [1,2]. These technological advancements have led to the creation of robust, adaptable AI models that improve image acquisition and allow for real-time quality assessment. They also offer objective disease detection and diagnosis while streamlining clinical workflows within ultrasound practices [3,4].
The integration of AI in medical imaging has demonstrated modality-specific applications, each presenting unique advantages and challenges. While AI in X-ray and CT imaging primarily leverages extensive standardized image databases to detect and diagnose pathologies automatically, the static nature of these imaging modalities constrains AI’s role in real-time diagnostics [5,6]. Magnetic Resonance Imaging (MRI), with its high resolution and tissue contrast, offers rich analytical potential for AI, yet its high cost, extended scanning times, and complex image reconstruction processes have directed AI efforts toward enhancing image quality and reducing scan durations. Conversely, ultrasound, as a dynamic and real-time imaging modality, possesses distinct advantages. Its real-time capability enables AI to facilitate dynamic monitoring and guided interventions, a feature unparalleled by other imaging techniques. Additionally, the portability and cost-effectiveness of ultrasound render it particularly valuable in resource-limited settings and remote healthcare. However, unlike X-ray and CT, ultrasound images are more subject to operator variability and factors such as probe selection and tissue matching, which significantly impact image quality [7]. These factors impose higher demands on the robustness and adaptability of AI systems in ultrasound applications. Consequently, AI-driven ultrasound diagnostics not only holds the promise of enhancing diagnostic accuracy and efficiency but also offers distinctive benefits in clinical scenarios requiring rapid, cost-effective, and flexible diagnostic solutions.
Ultrasound, as a non-invasive, convenient, and cost-effective diagnostic and therapeutic modality, effectively reflects the morphology and function of human tissues and organs, thereby playing a significant role in clinical applications. However, in actual ultrasound diagnostic practice, results may exhibit deviations influenced by the physician’s experience and subjective factors [8]. Furthermore, the high workload associated with ultrasound examinations and the relatively low efficiency of some practitioners indicate considerable room for improvement in patient satisfaction with ultrasound diagnostics and treatment.
The deep integration of AI with ultrasound technology has not only enhanced the accuracy and efficiency of diagnostics but has also propelled the intelligent advancement of ultrasound medical technology. This review is organized as follows: Section 2 delineates the technological foundations of AI in ultrasound medicine, encompassing machine learning, deep learning, and convolutional neural networks. Section 3 systematically reviews current AI applications in ultrasound diagnostics, including assisted diagnosis, error correction, and medical education. Section 4 explores future development trends across technological integration, standardization, personalized treatment, and telemedicine. Finally, challenges and conclusions are presented to guide further research directions.

2. The Technological Foundations of AI in Ultrasound Medicine

The application of AI in ultrasound medicine is fundamentally grounded in advanced technologies such as ML and its application in image recognition algorithms (Figure 1). These algorithms, trained on extensive datasets of ultrasound images, can automatically identify pathological features, thereby assisting physicians in achieving more accurate diagnoses. The workflow of AI-assisted ultrasound diagnosis typically consists of several steps: first, the acquired ultrasound images undergo preprocessing through computational techniques, including denoising and enhancement, to improve image quality; subsequently, AI algorithms are employed for feature extraction and classification of the preprocessed images; and finally, the AI system provides diagnostic recommendations based on the recognition results, aiding physicians in their decision-making processes (Figure 2).
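The preprocessing, feature extraction, classification, and recommendation steps described above can be sketched in miniature. Everything in this sketch is illustrative rather than clinical: the mean filter stands in for speckle reduction, the three toy features for learned representations, and the hand-set linear weights for a trained model.

```python
import numpy as np

def denoise(image: np.ndarray, kernel: int = 3) -> np.ndarray:
    """Illustrative preprocessing: a simple mean filter as a stand-in
    for clinical-grade denoising and enhancement."""
    pad = kernel // 2
    padded = np.pad(image, pad, mode="edge")
    out = np.zeros_like(image, dtype=float)
    for i in range(image.shape[0]):
        for j in range(image.shape[1]):
            out[i, j] = padded[i:i + kernel, j:j + kernel].mean()
    return out

def extract_features(image: np.ndarray) -> np.ndarray:
    """Toy feature vector: mean intensity, contrast (std), edge energy."""
    gy, gx = np.gradient(image.astype(float))
    return np.array([image.mean(), image.std(), np.hypot(gx, gy).mean()])

def classify(features: np.ndarray, weights: np.ndarray, bias: float) -> str:
    """Linear decision rule standing in for a trained classifier."""
    score = float(features @ weights + bias)
    return "suspicious" if score > 0 else "benign-appearing"

# Synthetic 8x8 "image" with a bright lesion-like patch
img = np.zeros((8, 8)); img[2:5, 2:5] = 1.0
feats = extract_features(denoise(img))
print(classify(feats, weights=np.array([1.0, 2.0, 1.0]), bias=-0.5))
```

In a real system each stage would be a learned model; the point here is only the shape of the pipeline, image in, recommendation out.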

2.1. Machine Learning

Big data serves as the foundation of AI, and the transformation of big data into knowledge or productivity is inextricably linked to ML, which is defined as the process by which a computer learns from experiences and performs predefined tasks without prior knowledge [9]. It emphasizes three key concepts: algorithms, experience, and performance. The primary goal of ML is to enable computer systems to learn and improve from data through algorithms, discovering patterns and regularities within the data to make predictions about unseen data. ML typically requires a substantial amount of data to train models. Its algorithms include supervised learning, unsupervised learning, semi-supervised learning, and reinforcement learning, among others [10].
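The "algorithms, experience, performance" framing can be illustrated with the simplest possible supervised learner: a threshold classifier that learns from labeled examples (experience) and then predicts unseen data (performance). The one-dimensional "echogenicity-like" feature and the synthetic data are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical labeled training data: one feature per "nodule"
X_benign = rng.normal(0.3, 0.05, size=50)
X_malig = rng.normal(0.7, 0.05, size=50)
X = np.concatenate([X_benign, X_malig])
y = np.array([0] * 50 + [1] * 50)   # 0 = benign, 1 = malignant

# "Experience": learn the decision threshold that minimizes training error
thresholds = np.linspace(0, 1, 101)
errors = [np.mean((X > t).astype(int) != y) for t in thresholds]
best_t = thresholds[int(np.argmin(errors))]

# "Performance": predict previously unseen examples
print(best_t, (0.25 > best_t), (0.8 > best_t))
```

Supervised learning in practice uses far richer features and models, but the structure is the same: parameters are fit to labeled data, then applied to data the model has never seen.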

2.2. Deep Learning

DL is regarded as a leading AI tool for image analysis [11]. Unlike traditional ML methods that necessitate hand-engineered feature extraction from input images, DL techniques automatically learn the features necessary for data classification [12]. As a subset of ML, DL leverages multi-layer artificial neural networks to process data. This approach enables the analysis of large volumes of data in a hierarchical and nonlinear manner, employing pattern recognition to extract highly representative features from images while continuously learning information representations from raw data [13]. The primary objective of DL is to autonomously learn and discover high-level abstract features from data through hierarchical networks. Within DL models, supervised and unsupervised learning are two principal methodologies that can mutually enhance one another; for example, unsupervised learning can identify features within the data, which can subsequently be utilized for predictions through supervised learning. In recent years, DL technology has witnessed remarkable advancements across various fields, driven by the rapid development of graphics processing units, high-performance central processing units, enhancements in learning algorithms, and the ongoing emergence of large-scale databases.
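The hierarchical, nonlinear processing that distinguishes DL can be seen in a minimal multi-layer forward pass: each layer re-represents its input before passing it on. The random weights below stand in for weights a real network would learn from data.

```python
import numpy as np

def relu(x):
    return np.maximum(0, x)

rng = np.random.default_rng(42)
# Two hidden layers: each one builds a new representation of its input,
# mimicking DL's hierarchical feature learning (weights are untrained).
W1, b1 = rng.normal(size=(4, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 8)), np.zeros(8)
W3, b3 = rng.normal(size=(8, 1)), np.zeros(1)

def forward(x):
    h1 = relu(x @ W1 + b1)       # low-level features
    h2 = relu(h1 @ W2 + b2)      # higher-level abstractions
    return 1 / (1 + np.exp(-(h2 @ W3 + b3)))  # probability-like output

x = np.array([0.2, 0.5, 0.1, 0.9])   # a hypothetical 4-value input
p = forward(x)
print(p)
```

Training (omitted here) adjusts the weight matrices by gradient descent so that the hierarchy of representations becomes useful for the task, which is exactly the "autonomous learning of high-level abstract features" described above.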

2.3. Convolutional Neural Networks (CNNs)

CNNs are a class of deep feed-forward neural networks specifically designed for spatial modeling in image analysis [14]. CNNs have played a pivotal role in the adoption of DL for video and image processing applications [15]. In medical image analysis, CNNs are primarily employed for the semantic segmentation of anatomical structures and lesions (Figure 3). A typical CNN architecture consists of three types of layers: convolutional layers, pooling layers, and fully connected layers. The convolutional layers serve as the main structural components of CNNs, compressing input data into recognizable patterns to reduce data size and emphasize relevant features [16]. The core strength of CNNs lies in their deep architecture, which facilitates the extraction of discriminative features at multiple levels of abstraction. This hierarchical learning approach enables CNNs to capture complex patterns in medical images, ultimately enhancing diagnostic accuracy. Consequently, the integration of CNNs into clinical workflows represents a significant advancement in the realm of automated image analysis and decision support systems.
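The convolutional and pooling layers just described can be demonstrated concretely. Below is a toy convolution and max-pooling pass in NumPy (the fully connected stage is omitted); the single edge-detecting kernel is illustrative, whereas real CNNs learn many kernels from data.

```python
import numpy as np

def conv2d(img, kernel):
    """Valid-mode 2D convolution (cross-correlation, as in CNN libraries)."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = (img[i:i + kh, j:j + kw] * kernel).sum()
    return out

def max_pool(feat, size=2):
    """Max pooling: keep the strongest response in each size x size block."""
    h, w = feat.shape
    return feat[:h - h % size, :w - w % size] \
        .reshape(h // size, size, w // size, size).max(axis=(1, 3))

img = np.zeros((6, 6)); img[:, 3:] = 1.0   # image with a vertical edge
edge_kernel = np.array([[-1.0, 1.0]])      # responds to left-to-right steps
fmap = conv2d(img, edge_kernel)            # 6x5 feature map
pooled = max_pool(fmap)                    # 3x2 after 2x2 pooling
print(fmap.shape, pooled.shape, pooled.max())
```

The feature map lights up exactly where the edge sits, and pooling compresses it while preserving that response, which is the "compress input data into recognizable patterns" behavior described above.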

3. Classification and Implementation of AI Technologies in Ultrasound

3.1. AI-Assisted Diagnostic Technologies

AI technologies, particularly those utilizing DL algorithms, have revolutionized the automated analysis of ultrasound images. This category encompasses systems where AI serves as a diagnostic adjunct, enhancing human interpretation without replacing clinical judgment.

3.1.1. Thyroid Nodules

The rising incidence of thyroid cancer and the workload of physicians have driven the need to utilize AI to efficiently process thyroid ultrasound images [17]. In the context of thyroid nodule assessment, AI has emerged as a vital tool aimed at improving diagnosis, evaluation, and management [18,19,20]. The application of AI in the thyroid primarily includes thyroid segmentation [21,22,23,24,25] and the differential diagnosis of thyroid nodules [8,26,27,28,29]. However, most studies focus on utilizing ultrasound features to distinguish between benign and malignant thyroid nodules or to predict cervical lymph node metastasis. By accurately distinguishing between benign and malignant nodules, AI aids clinicians in devising personalized treatment strategies. A critical component of developing effective AI models is the extraction of relevant features from ultrasound images. This involves analyzing textural, shape, and edge characteristics, as well as integrating numerical data such as size, shape, and echogenicity of nodules. Leveraging these features not only enhances the recognition and classification of thyroid nodules but also leads to improved diagnostic accuracy.
In a recent study, Xu, WenWen et al. [8] developed artificial intelligence models for the detection, segmentation, and classification of thyroid nodules using diverse datasets from 208 hospitals in China, achieving average precision, Dice coefficients, and AUC values of 0.98, 0.86, and 0.90, respectively. Rule-based AI assistance significantly improved the diagnostic accuracy of both senior and junior radiologists (p < 0.05), highlighting the potential for AI integration in thyroid cancer detection. Toro-Tobon, David et al. [30] summarized the applications of artificial intelligence in thyroidology in their review, highlighting its potential to enhance the diagnosis and management of thyroid conditions through process automation, improved diagnostic accuracy, and personalized treatment. However, many of these applications remain in the validation or early clinical evaluation stages and face challenges such as limited prospective multicenter studies, small and low-diversity training datasets, and unclear clinical impacts. Addressing these limitations is crucial to ensure that AI interventions deliver meaningful benefits for patients with thyroid diseases.
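The Dice coefficient and AUC quoted in these studies are standard evaluation metrics for segmentation and classification, respectively, and are straightforward to compute. The masks and scores below are toy examples, not data from the cited work.

```python
import numpy as np

def dice(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice coefficient between two binary segmentation masks:
    2 * |A ∩ B| / (|A| + |B|)."""
    inter = np.logical_and(pred, truth).sum()
    return 2.0 * inter / (pred.sum() + truth.sum())

def auc(scores, labels) -> float:
    """AUC via the rank (Mann-Whitney) formulation: the probability that a
    randomly chosen positive scores higher than a randomly chosen negative."""
    pos = [s for s, l in zip(scores, labels) if l == 1]
    neg = [s for s, l in zip(scores, labels) if l == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

truth = np.zeros((8, 8), bool); truth[2:6, 2:6] = True  # 16-pixel "nodule"
pred = np.zeros((8, 8), bool); pred[3:6, 2:6] = True    # 12-pixel prediction
print(dice(pred, truth))
print(auc([0.9, 0.8, 0.4, 0.3], [1, 1, 0, 0]))
```

A Dice of 0.86 therefore means the predicted and reference nodule masks overlap substantially, and an AUC of 0.90 means the model ranks malignant cases above benign ones about 90% of the time.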

3.1.2. Breast Nodules

Breast cancer is the most common cancer among women and the leading cause of cancer-related deaths, making early detection crucial for effective treatment and improved survival rates [31,32,33]. Ultrasound examination has become vital for screening and differential diagnosis, but its effectiveness is operator-dependent, and resource distribution can lead to care disparities. Variations in physician experience may further impact diagnosis and prognosis. AI can enhance the detection and diagnosis of breast nodules, serving as a valuable complement to human interpretation. In recent years, DL methods have emerged as a promising avenue for enhancing clinical efficiency and accuracy [34] (Figure 4). At the core of this progress is the use of medical imaging for object classification and segmentation, which has been at the forefront of various pilot studies. Notably, a significant number of these investigations have focused specifically on breast imaging, reflecting the growing interest in applying advanced techniques to this critical area of healthcare [33,35].
The application of AI in breast ultrasound is particularly pronounced in two critical domains: breast image augmentation [36,37,38,39,40,41,42] and breast lesion detection and diagnosis [36,43,44,45,46,47,48,49,50]. Notably, AI technologies integrated with the Breast Imaging Reporting and Data System (BI-RADS) enable the extraction and quantitative analysis of morphological and textural characteristics. This integration significantly enhances the accuracy and consistency of diagnoses, allowing for better differentiation between benign and malignant nodules. Recently, Magnuska and her colleagues [51] developed a real-time breast tumor classification model based on ultrasound imaging, achieving high diagnostic accuracy by combining classic radiomics and autoencoder features. The model demonstrated excellent performance on a training set of 1191 female patients, with an AUC of 0.90, and showed no significant difference in performance compared to human readers.

3.1.3. Heart Disease

Developments in AI have led to an explosion of studies exploring its application to cardiovascular medicine [18,52]. Several studies have investigated the use of AI, notably DL and ML algorithms, for the automated analysis of various echocardiogram types, such as transthoracic echocardiograms (TTE), transesophageal echocardiograms, and Doppler echocardiography, as noted by De Siqueira et al. [53,54]. Zamzmi et al. [55] reviewed both manual and automated methods for analyzing echocardiogram modes. Karatzia, Loucia et al. [2] focused on primary studies related to DL algorithms in the automated processing of TTE, assessing the progress and challenges in DL-enhanced clinical decision making. Nedadur, Rashmi et al. [52] explored the application of AI in assessing valvular heart diseases and summarized existing AI methodologies for valvular image analysis and phenotyping. Föllmer, Bernhard et al. [56] reviewed existing evidence on the application of AI to the imaging of vulnerable plaque in coronary arteries and provided consensus recommendations developed by an interdisciplinary group of experts on AI and non-invasive and invasive coronary imaging. A review by Sengupta, Partho P et al. [57] analyzed key studies and identified challenges requiring a pragmatic change in the approach to using AI for cardiac imaging, in which AI is viewed as augmented intelligence that complements, rather than replaces, human judgment. In echocardiographic diagnosis, AI technologies can automatically identify structural abnormalities in the heart, such as wall motion abnormalities, thereby providing robust support for the early diagnosis of cardiac conditions, including myocardial ischemia.

3.1.4. Liver Diseases

Over the past two decades, hepatology has made remarkable strides in diagnostics, prognostics, and treatment options, evolving into a highly complex medical specialty [58]. AI has been shown to be an excellent tool for the study of the liver [59]. Currently, AI algorithms are employed in liver imaging, histopathology interpretation, noninvasive testing, and predictive modeling.
Zamanian, H et al. [60] developed a combined algorithm using neural networks to classify ultrasound images of patients with fatty liver. By analyzing images from 55 patients, pre-trained convolutional neural networks (Inception-ResNetv2, GoogleNet, AlexNet, and ResNet101) extracted features, which were classified using a support vector machine. The combined network achieved an impressive AUC of 0.9999 and an accuracy of 0.9864, demonstrating its high clinical applicability without needing user or expert intervention. In a multicenter study conducted by Yang, Qi et al. [61], they developed a deep convolutional neural network for ultrasound (DCNN-US) to assist radiologists in classifying malignant from benign focal liver lesions (FLLs). The model demonstrated an AUC of 0.924 and exhibited superior diagnostic sensitivity and specificity compared to experienced radiologists, indicating its potential to enhance the performance of less-experienced radiologists and reduce reliance on sectional imaging in liver cancer diagnosis. Regarding liver transplantation, Uche-Anya, Eugenia et al. [62] emphasized the application value of AI and machine learning specifically in liver transplantation, highlighting their potential to enhance organ allocation processes and improve patient outcomes by accurately assessing hepatic conditions and predicting transplant success.
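The combined-network design reported by Zamanian et al., pre-trained CNN feature extractors feeding a separate classifier, can be caricatured as follows. Random projections stand in for the CNN backbones, a nearest-centroid rule stands in for the SVM stage, and the data are synthetic; none of this reproduces the cited models.

```python
import numpy as np

rng = np.random.default_rng(7)

def backbone_features(img, seed):
    """Stand-in for a pre-trained CNN backbone: a fixed random projection
    of the flattened image followed by a ReLU."""
    proj = np.random.default_rng(seed).normal(size=(img.size, 16))
    return np.maximum(0, img.ravel() @ proj)

def combined_features(img):
    # Concatenate "features" from several backbones, mirroring the idea
    # of combining multiple pre-trained networks.
    return np.concatenate([backbone_features(img, s) for s in (0, 1, 2)])

# Toy data: "fatty" images are brighter on average than "normal" ones
fatty = [rng.normal(0.7, 0.1, (8, 8)) for _ in range(20)]
normal = [rng.normal(0.3, 0.1, (8, 8)) for _ in range(20)]
X = np.stack([combined_features(i) for i in fatty + normal])
y = np.array([1] * 20 + [0] * 20)

# Nearest-centroid classifier standing in for the SVM stage
c1, c0 = X[y == 1].mean(0), X[y == 0].mean(0)

def predict(img):
    f = combined_features(img)
    return int(np.linalg.norm(f - c1) < np.linalg.norm(f - c0))

print(sum(predict(i) for i in fatty), sum(predict(i) for i in normal))
```

The design choice being illustrated is the decoupling: fixed feature extractors produce a representation, and only the lightweight classifier on top is fit to the task, which is why such pipelines work even with small datasets like the 55 patients in the cited study.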

3.1.5. Obstetrics and Gynecology (OB/GYN)

In contrast to other areas, such as breast and thyroid imaging, the impacts of AI have not been as strongly felt in the field of OB/GYN [63]. Until mid-2020, most research on AI in OB/GYN was preliminary, often published in non-specialized journals. When findings appeared in OB/GYN journals, they typically lacked validation across multiple datasets and essential clinical validation, both of which are crucial for establishing the reliability of AI processes [64,65]. In a systematic literature review by Jost, Elena et al. [66], a relatively small number of articles (189) were identified regarding the applications of AI in ultrasound imaging within the field of OB/GYN, covering the period from 1994 to 2023. Among these, 148 articles focused on obstetric applications, while 41 addressed gynecological concerns. The applications of AI-assisted ultrasound are diverse, encompassing fetal biometrics, echocardiography or neurosonography, as well as the identification of adnexal and thoracic masses, and the assessment of the endometrium and pelvic floor. Notably, most research has been concentrated in common application areas, particularly fetal biometrics, highlighting a need for further exploration in the field. In OB/GYN ultrasound, significant advancements with the potential to enhance workflow include the automatic identification of standard imaging planes and quality assurance in fetal ultrasounds [67,68,69,70,71,72]. Other developments involve the automated classification of ovarian tumors [73,74,75,76,77], assessment of endometrial and uterine cavity abnormalities [78,79,80], and pelvic floor dysfunction [81,82,83].
As we explore the complex and opaque nature of emerging AI clinical prediction tools, it is important to recognize that bias can be encoded even in conventional prediction models, including simple, rule-based algorithms [62]. Overall, the application of AI in ultrasound diagnostics enhances clinical decision making and paves the way for more effective patient management across various medical specialties.

3.1.6. Other Applications

The ongoing development of ultrasound and AI technologies has expanded their application to various fields, including pulmonary [84,85,86,87], musculoskeletal [88,89,90,91], and fetal and infant brain [72,92,93,94,95] imaging. This evolution is effectively dismantling traditional limitations in ultrasound examinations, improving imaging quality, optimizing procedural workflows, and advancing the field of ultrasound medicine toward precision healthcare.

3.2. AI-Driven Autonomous Systems

This classification encompasses AI systems capable of end-to-end task execution without human intervention during core analytical processes, fundamentally redefining ultrasound workflow paradigms.
Closed-Loop Diagnostic Architectures: Modern autonomous systems integrate real-time image acquisition with embedded AI processors to achieve self-contained diagnostic workflows. For instance, reinforcement learning agents now dynamically adjust Doppler sampling gates based on vascular flow patterns, automatically optimizing measurement accuracy during cardiac output assessments. These systems not only analyze images but also synthesize multi-modal clinical data streams—including electronic health records and lab results—to generate diagnostic hypotheses with probabilistic confidence scoring [96,97].
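A toy version of the reinforcement-learning idea mentioned above: tabular Q-learning on a hypothetical one-dimensional Doppler gate, where reward increases as the gate approaches the vessel center. Real systems operate on far richer state, action, and reward signals; this sketch only shows the learning loop.

```python
import random

# Toy environment: gate positions 0..4; position 2 is "optimal"
# (vessel center). Reward is higher the closer the gate sits to it.
def reward(pos):
    return -abs(pos - 2)

ACTIONS = (-1, 0, 1)   # shift gate left, hold, shift right
Q = {(s, a): 0.0 for s in range(5) for a in ACTIONS}
random.seed(0)

def step(s, a):
    return min(4, max(0, s + a))

# Tabular Q-learning: the agent learns which shifts improve the signal
for _ in range(2000):
    s = random.randrange(5)
    a = random.choice(ACTIONS) if random.random() < 0.2 else \
        max(ACTIONS, key=lambda x: Q[(s, x)])
    s2 = step(s, a)
    Q[(s, a)] += 0.1 * (reward(s2) + 0.9 * max(Q[(s2, x)] for x in ACTIONS)
                        - Q[(s, a)])

policy = {s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(5)}
print(policy)
```

After training, the learned policy shifts the gate toward position 2 from either side and holds it there, the same trial-and-reward structure that, at scale, lets an agent tune acquisition parameters from image feedback.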
Predictive Clinical Engines: Through continuous learning from population-scale data, autonomous AI models correlate subtle sonographic biomarkers with longitudinal outcomes. A prototypical application is the automated prediction of hepatocellular carcinoma progression risk by analyzing contrast-enhanced ultrasound kinetics in cirrhotic patients, enabling preemptive therapeutic interventions. This approach significantly improves diagnostic efficiency while maintaining audit trails for clinical validation [98].
Self-Optimizing Operational Frameworks: The integration of AI with advanced ultrasound modalities (elastography, microvascular imaging) has given rise to self-calibrating systems [26,30]. Deep reinforcement learning algorithms now autonomously adjust transducer frequencies (2–18 MHz) and compression settings based on real-time tissue feedback, achieving 41% faster protocol optimization compared to manual adjustments in abdominal imaging. These systems demonstrate emergent capabilities in error anticipation, such as detecting probe misalignment through speckle pattern analysis and guiding repositioning via augmented reality overlays [99].
Autonomous Quality Assurance: Embedded within PACS infrastructure, self-monitoring AI modules perform cross-modal consistency checks between ultrasound images and structured reports. Natural language processing algorithms automatically flag discrepancies (e.g., “hypoechoic mass” described without size measurements), while computer vision models verify anatomical labeling accuracy through spatial–semantic mapping. This closed-loop quality control reduces reporting errors by 63% in multicenter trials, establishing new standards for diagnostic reliability [100].
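The "hypoechoic mass described without size measurements" check can be sketched with simple rules. Production systems use trained NLP models rather than regular expressions, so the snippet below is only an illustration of the consistency-checking idea; the finding and size patterns are hypothetical.

```python
import re

def flag_missing_measurements(report: str) -> list:
    """Flag sentences that mention a mass/nodule/lesion but give no size.

    A rule-based stand-in for the NLP consistency checks described above.
    """
    finding = re.compile(r"\b(mass|nodule|lesion)\b", re.IGNORECASE)
    size = re.compile(r"\d+(\.\d+)?\s*(mm|cm)\b", re.IGNORECASE)
    flags = []
    for sentence in re.split(r"(?<=[.!?])\s+", report):
        if finding.search(sentence) and not size.search(sentence):
            flags.append(sentence.strip())
    return flags

report = ("Hypoechoic mass in the right lobe. "
          "Isoechoic nodule measuring 8 mm in the left lobe.")
print(flag_missing_measurements(report))
```

The first sentence is flagged (a mass with no measurement) while the second passes, mirroring the cross-checks between findings and measurements that the quality-assurance modules perform at scale.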

3.3. AI-Enhanced Educational Technologies

The integration of AI into ultrasound medicine is transforming not only clinical diagnostics but also medical education [101]. As a distinct technological paradigm, AI-enhanced educational systems are redefining competency development through three interconnected pillars:

3.3.1. Intelligent Simulation Platforms

AI-driven virtual reality systems integrate multi-physics haptic feedback with adaptive scenario generation. Convolutional neural networks analyze trainees’ probe manipulation trajectories in real time, dynamically adjusting virtual tissue elasticity and acoustic shadowing patterns to match individual skill levels. These systems enable the risk-free practice of complex clinical scenarios, from obstetric emergencies to contrast-enhanced echocardiography, with spatial–semantic mapping algorithms ensuring anatomical fidelity in simulated structures [102].

3.3.2. Competency Assessment and Adaptive Learning Systems

Deep learning frameworks establish a closed-loop educational ecosystem that dynamically evaluates and enhances trainee competencies. By analyzing probe navigation patterns through 3D motion tracking and speckle decorrelation analytics, these systems objectively quantify scanning efficiency while flagging suboptimal trajectories that compromise diagnostic accuracy. Reinforcement learning models further decode image optimization strategies by modeling expert consensus on gain adjustment sequences and depth selection logic, generating actionable feedback for technical refinement.
This assessment paradigm seamlessly integrates with adaptive learning architectures. Neural networks synthesize individualized knowledge gaps from simulation histories and population-level competency analytics, automatically curating microlearning modules that target specific weaknesses—such as cardiac valve assessment or contrast-enhanced protocol optimization. Continuous integration of emerging clinical guidelines ensures educational content evolves in tandem with ultrasound practice standards. A prototypical implementation at Johns Hopkins SOM demonstrates this synergy, where AI-driven assessment-to-intervention cycles significantly accelerate skill maturation [103,104].
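One simple surrogate for the "scanning efficiency" quantified from probe tracking is the ratio of net displacement to total path length. The metric and the sample trajectories below are illustrative only, not the motion analytics used by the cited systems.

```python
import numpy as np

def path_efficiency(track: np.ndarray) -> float:
    """Ratio of straight-line displacement to total path length (0..1).
    1.0 means a perfectly direct sweep; lower values indicate wandering."""
    seg = np.diff(track, axis=0)
    total = np.linalg.norm(seg, axis=1).sum()
    direct = np.linalg.norm(track[-1] - track[0])
    return float(direct / total) if total else 1.0

# Hypothetical 2D probe positions sampled over time
expert = np.array([[0, 0], [1, 0], [2, 0], [3, 0]], float)   # direct sweep
novice = np.array([[0, 0], [1, 1], [2, -1], [3, 0]], float)  # wandering
print(path_efficiency(expert), path_efficiency(novice))
```

A competency system could track such a score per trainee over time; the direct expert sweep scores 1.0 while the wandering trajectory scores noticeably lower, giving an objective, repeatable signal for feedback.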

3.3.3. Educational Workflow Integration

AI-enhanced educational workflows (Figure 5) harmonize human expertise with autonomous systems through two complementary modes. AI-assisted tools provide real-time guidance in training simulations—such as dynamically annotating anatomical landmarks and optimizing probe trajectories—while maintaining clinician oversight for critical decisions. In parallel, AI-driven systems automate competency validation by analyzing integrated performance data (image acquisition, optimization, diagnostic reasoning) against expert-curated benchmarks.
The University of Toronto’s hybrid model exemplifies this integration, combining AI-guided virtual scenarios (e.g., emergency cardiac imaging simulations) with autonomous skill assessments. This balanced approach achieves high resident engagement by preserving educational autonomy while leveraging AI’s consistency in repetitive skill evaluation [105].

4. Future Development Trends

The rapid evolution of AI in ultrasound medicine is reshaping healthcare delivery, with emerging trends poised to address longstanding challenges in accessibility, accuracy, and efficiency. These advancements are not merely technological upgrades but represent a fundamental shift toward patient-centric, data-driven healthcare ecosystems. By exploring these dimensions in depth, we aim to provide a holistic perspective on how AI will redefine ultrasound medicine in the coming decade.

4.1. Technological Integration and Innovation

The integration of AI with ultrasound medicine is expected to deepen considerably. Emerging hybrid architectures now enable ultrasound systems to dynamically adapt imaging parameters based on real-time tissue feedback, significantly improving signal-to-noise ratios in challenging anatomical regions. As technologies such as big data, cloud computing, and the internet of things continue to advance, AI will be capable of processing increasingly complex and diverse ultrasound image data, thereby enhancing diagnostic accuracy and efficiency [106]. For instance, self-supervised learning frameworks are overcoming data scarcity challenges by generating synthetic training datasets that preserve pathological features while ensuring patient privacy.
Furthermore, AI will converge with other medical imaging modalities, such as CT and MRI, to create multimodal imaging diagnostic systems that offer patients more comprehensive and precise diagnostic services [107]. This convergence is particularly impactful in oncology, where fused AI-enhanced imaging allows clinicians to map tumor microenvironments with unprecedented spatial-temporal resolution.
Big Data and Machine Learning: The use of big data in ultrasound medicine will enable AI algorithms to learn from vast repositories of patient data, improving the accuracy of diagnostic models. Cross-institutional data sharing protocols are now enabling AI models to generalize across diverse patient populations and imaging devices.
Cloud Computing: Cloud-based platforms will facilitate the storage and processing of ultrasound data, enabling faster and more efficient analysis. Edge-cloud hybrid architectures are emerging to balance real-time processing needs with centralized model refinement, particularly for longitudinal disease monitoring.
IoT and Wearable Devices: The integration of IoT with ultrasound devices will enable real-time monitoring and remote diagnostics. Novel flexible transducer arrays coupled with AI-driven signal processing are redefining continuous physiological monitoring, from fetal heart rate tracking to musculoskeletal rehabilitation assessment.
Multimodal Imaging: AI will play a crucial role in integrating ultrasound with other imaging modalities such as CT and MRI. Neural style transfer techniques are being employed to harmonize imaging features across modalities, reducing interpretation discrepancies in multi-scanner environments.

4.2. Standardization and Normalization

To address the growing demands in healthcare, the standardization and normalization of ultrasound medicine will emerge as a significant trend. International consortia are developing AI-powered quality control systems that automatically audit examination protocols, ensuring adherence to global best practices across all skill levels. AI technology will play a pivotal role in this evolution. Its application can optimize ultrasound examination workflows, facilitating the standardization and automation of the examination process. Intelligent scanning assistants now provide real-time feedback on probe positioning and imaging plane selection, effectively democratizing expert-level acquisition techniques.
By establishing uniform operational procedures and examination standards, AI can reduce the influence of human factors on diagnostic outcomes, thereby enhancing the normative consistency of diagnoses [108]. These systems are particularly transformative in emergency medicine, where protocol-driven AI guidance helps maintain diagnostic rigor during time-sensitive interventions. Additionally, AI can intelligently allocate diagnostic resources based on individual patient circumstances, ensuring the efficient utilization of limited medical resources [109,110,111]. Adaptive scheduling algorithms now prioritize urgent cases while dynamically redistributing imaging workloads across networked devices.
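The urgency-aware allocation idea can be sketched with a priority queue: urgent studies jump ahead, and ties are resolved by arrival order. The triage levels and study names are hypothetical, and a clinical scheduler would weigh far more factors.

```python
import heapq
import itertools

# Toy adaptive scheduler: lower urgency number = seen sooner.
URGENCY = {"stat": 0, "urgent": 1, "routine": 2}
counter = itertools.count()   # arrival order breaks ties
queue = []

def enqueue(study, level):
    heapq.heappush(queue, (URGENCY[level], next(counter), study))

def next_study():
    return heapq.heappop(queue)[2]

enqueue("routine thyroid", "routine")
enqueue("trauma FAST exam", "stat")
enqueue("DVT rule-out", "urgent")
order = [next_study() for _ in range(3)]
print(order)
```

The stat exam is dispatched first even though it arrived second, which is the basic prioritization behavior an adaptive scheduling algorithm generalizes across networked devices and dynamic workloads.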

4.3. Personalized Treatment and Intelligent Management

AI technology is driving advancements in ultrasound medicine toward personalized treatment and intelligent management. Deep phenotyping approaches are correlating ultrasound biomarkers with multi-omics data to create individualized disease progression models, fundamentally altering chronic disease management paradigms. AI offers a competitive advantage through enhanced patient experiences, improved outcomes, early diagnosis, augmented clinician capabilities, increased operational efficiency, and greater accessibility to medical services [112].
Specifically, by analyzing the ultrasonic image characteristics of patients, AI systems can generate personalized treatment plans and monitor treatment effects in real time, allowing for timely adjustments to therapeutic strategies [113]. In rheumatology, AI-powered ultrasound tracking systems are enabling precision-guided drug titration by quantifying subtle changes in synovial vascularity and effusion volumes. Furthermore, AI technology facilitates intelligent management across various stages, including diagnosis, treatment, and follow-up, thereby enhancing the efficiency and quality of healthcare services. Integrated care platforms now automatically synthesize ultrasound findings with electronic health records to generate context-aware clinical decision trees, reducing cognitive load during complex case evaluations.
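The monitoring logic described above ultimately maps a measured change between serial exams onto a coarse treatment signal. A minimal sketch of that mapping follows; the thresholds and labels are illustrative assumptions, not clinical guidance:

```python
def relative_change(baseline, follow_up):
    """Fractional change of a serial ultrasound measurement vs. baseline."""
    if baseline <= 0:
        raise ValueError("baseline must be positive")
    return (follow_up - baseline) / baseline

def titration_flag(baseline_ml, follow_up_ml, improve=-0.20, worsen=0.20):
    """Map the change in a measurement (e.g., effusion volume in mL)
    to a coarse treatment signal. Thresholds are placeholders."""
    change = relative_change(baseline_ml, follow_up_ml)
    if change <= improve:
        return "consider-deescalation"
    if change >= worsen:
        return "consider-escalation"
    return "maintain"

print(titration_flag(12.0, 8.0))   # -33% -> consider-deescalation
print(titration_flag(12.0, 12.5))  # +4%  -> maintain
```

A real decision-support system would combine several such quantified biomarkers with patient context from the electronic health record before surfacing a recommendation.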

4.4. Telemedicine and Smart Healthcare

With the ongoing advancements and widespread adoption of AI technology, telemedicine and smart healthcare are poised to become significant directions for the future development of medical services. AI-mediated ultrasound compression algorithms now enable diagnostic-grade image transmission at bandwidths compatible with rural cellular networks, breaking critical infrastructure barriers. The integration of AI with ultrasound is making remote ultrasound diagnostics more convenient and efficient [114]. Collaborative AI systems allow distributed expert teams to jointly annotate ultrasound studies in virtual reading rooms, creating new paradigms for global specialist consortia.
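The bandwidth arithmetic behind such transmission can be sketched with off-the-shelf lossless compression; `zlib` here is a simple stand-in for the specialized (often lossy, learned) codecs the text refers to, and the frame is a synthetic placeholder:

```python
import zlib

def fits_bandwidth(frame_bytes, kbps, seconds_per_frame=1.0, level=6):
    """Compress a frame losslessly and check whether it fits the
    per-frame byte budget implied by a link speed in kilobits/s."""
    compressed = zlib.compress(frame_bytes, level)
    budget_bytes = kbps * 1000 / 8 * seconds_per_frame
    return len(compressed), len(compressed) <= budget_bytes

# A synthetic, highly redundant 256x256 8-bit frame compresses very well;
# real ultrasound frames with speckle compress far less.
frame = bytes(64 * 1024)  # all-zero placeholder frame
size, ok = fits_bandwidth(frame, kbps=64)
print(size, ok)
```

The point of the sketch is the budget calculation: a 64 kbps rural link allows roughly 8 KB per one-second frame, which is why diagnostic-grade transmission at such bandwidths requires far more aggressive, content-aware compression than generic lossless coding.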
Remote examination and diagnosis by expert physicians for patients in underserved areas not only addresses the uneven distribution of medical resources but also provides higher-quality medical services to these populations. Adaptive user interfaces automatically simplify examination protocols for community health workers while maintaining diagnostic validity through embedded quality assurance checks. The incorporation of AI technology has made mobile ultrasound devices increasingly intelligent and portable, enabling rapid examinations in diverse settings and the transmission of data to the cloud for further analysis. These portable systems now incorporate environmental adaptation algorithms that compensate for suboptimal scanning conditions, from uneven patient positioning to acoustic interference in field deployments.
Moreover, the development of intelligent healthcare systems will further promote the optimized allocation of medical resources and the intelligent upgrading of medical services [115,116]. Predictive maintenance AI models are extending ultrasound device lifespans in resource-constrained settings by anticipating component failures and guiding targeted repairs.
The future of AI in ultrasound medicine is promising, with technological integration, standardization, personalized treatment, and telemedicine leading the way. These advancements are creating synergistic effects—standardized protocols enhance telemedicine reliability, while personalized AI models leverage multimodal data from integrated systems. By embracing these advancements, the healthcare industry can harness the full potential of AI to enhance diagnostic accuracy, improve patient outcomes, and revolutionize the delivery of medical services. Critical challenges remain in establishing ethical governance frameworks and ensuring equitable access to these technologies, particularly for aging populations and developing regions. The flowchart provided offers a clear visual representation of these trends, underscoring the interconnected nature of AI and emerging technologies in shaping the future of ultrasound medicine (Figure 6). Continuous research and collaboration will be essential to overcome challenges and ensure that these innovations are accessible and beneficial to all.

5. Challenges and Limitations

While AI holds immense potential in revolutionizing the field of medical diagnostics, its implementation faces significant challenges and limitations. One of the primary obstacles is the regulatory and ethical barriers. The lack of a unified regulatory framework across different regions poses challenges for implementing AI technologies uniformly. For instance, the European Union’s General Data Protection Regulation (GDPR) imposes stringent requirements on data privacy, which may hinder the collection and utilization of medical data necessary for AI applications [117,118]. Moreover, ensuring the transparency and interpretability of AI algorithms remains a critical ethical concern, as it directly impacts public trust in AI systems.
Another significant issue is data privacy and security. In the medical domain, patient confidentiality is paramount. The deployment of AI for diagnostic purposes necessitates the processing of vast amounts of sensitive patient data, requiring a delicate balance between data utility and privacy protection [119]. While technological measures such as data encryption and anonymization can mitigate these risks, the potential for data breaches or misuse remains a persistent concern. Robust legal frameworks and institutional policies are essential to address these challenges comprehensively.
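The anonymization measures mentioned above can be illustrated with a minimal pseudonymization sketch: direct identifiers are replaced with a keyed hash and quasi-identifiers are coarsened. All field names, the record, and the key handling are illustrative assumptions; production systems use managed key stores and formal de-identification standards:

```python
import hashlib
import hmac

SECRET_KEY = b"site-specific-secret"  # placeholder; use a real key store

def pseudonymize(record, key=SECRET_KEY):
    """Replace direct identifiers with a keyed hash and drop free-text
    fields. A keyed (HMAC) hash, unlike a plain hash, cannot be reversed
    by brute-forcing the small space of patient IDs without the key."""
    token = hmac.new(key, record["patient_id"].encode(),
                     hashlib.sha256).hexdigest()[:16]
    return {
        "pseudo_id": token,
        "age_band": (record["age"] // 10) * 10,  # coarsen quasi-identifiers
        "exam_type": record["exam_type"],
    }

rec = {"patient_id": "MRN-0042", "age": 57,
       "exam_type": "thyroid-US", "name": "Jane Doe"}  # synthetic record
out = pseudonymize(rec)
print(out["age_band"], "name" in out)  # 50 False
```

Because the same input always maps to the same token, longitudinal studies can still link a patient's serial exams without ever storing the original identifier.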
Additionally, algorithmic bias and liability issues pose further ethical dilemmas. AI algorithms, trained on datasets that may reflect inherent biases, risk perpetuating disparities and producing discriminatory outcomes [120]. Such biases could lead to unfair diagnostic results in healthcare, exacerbating existing inequalities. Furthermore, the question of accountability remains unresolved: when AI systems generate erroneous diagnoses, there is currently no standardized legal framework to establish liability, leaving a critical gap in addressing potential consequences.
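One concrete way such bias is detected is by auditing a model's error rates per demographic group. The sketch below computes per-group true-positive rates and their gap on toy data; the groups and labels are synthetic, and real audits would use multiple fairness metrics with confidence intervals:

```python
def group_rates(y_true, y_pred, groups):
    """Per-group true-positive rate: a basic check for diagnostic bias."""
    rates = {}
    for g in set(groups):
        tp = fn = 0
        for t, p, gg in zip(y_true, y_pred, groups):
            if gg != g or t != 1:
                continue  # only positives in group g count toward TPR
            if p == 1:
                tp += 1
            else:
                fn += 1
        rates[g] = tp / (tp + fn) if (tp + fn) else None
    return rates

def tpr_gap(rates):
    """Largest disparity in sensitivity across groups."""
    vals = [v for v in rates.values() if v is not None]
    return max(vals) - min(vals)

# Toy audit: the model misses more positives in group "B".
y_true = [1, 1, 0, 1, 1, 0]
y_pred = [1, 1, 0, 0, 1, 1]
groups = ["A", "A", "A", "B", "B", "B"]
r = group_rates(y_true, y_pred, groups)
print(r, tpr_gap(r))
```

A sensitivity gap like the one above (1.0 for group A versus 0.5 for group B) is exactly the kind of disparity that, left unaudited, would translate into systematically missed diagnoses for one population.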
Therefore, while AI offers groundbreaking opportunities for advancing ultrasound medicine and medical diagnostics, its implementation is fraught with challenges related to regulation, ethics, data privacy, and algorithmic fairness. Addressing these issues requires concurrent advancements in policy development, legal frameworks, and ethical guidelines to ensure the safe, compliant, and equitable deployment of AI technologies in healthcare settings.

6. Conclusions

AI offers numerous advantages, including objectivity, repeatability, speed, and accuracy. As a rapidly evolving field, AI holds great potential across various healthcare domains, particularly in radiology [121]. The integration of AI with ultrasound medicine presents unprecedented opportunities and challenges within medical diagnostics. Advances in automated image analysis and recognition, intelligent decision support systems, optimization of examination processes, resource management, and remote ultrasound diagnostics and education are gradually transforming traditional diagnostic and therapeutic paradigms in ultrasound medicine, thereby enhancing both the accuracy and efficiency of diagnoses.
However, no comprehensive meta-analysis has yet addressed the diagnostic accuracy of AI within the medical profession, and rigorous, independent assessments of the technology remain in their infancy [122]. Furthermore, to justify clinical adoption, AI must deliver benefits beyond accuracy alone, including greater speed, efficiency, cost-effectiveness, and accessibility, while upholding ethical standards [123,124,125].
Future research should focus on several key areas to address the existing gaps. First and foremost, systematic evaluation and meta-analysis of AI’s diagnostic accuracy in ultrasound imaging are essential to comprehensively synthesize existing evidence and validate the effectiveness and consistency of AI across different clinical scenarios. Additionally, further exploration is needed to leverage AI in the multi-modal application of ultrasound imaging, such as integrating 2D, 3D, and Doppler imaging, to optimize diagnostic workflows and improve detection precision. Another critical area of investigation involves the integration of AI technologies into real-time diagnostic support systems for ultrasound medicine, aiming to enhance both diagnostic efficiency and clinical decision-making capabilities. Furthermore, the standardization of AI applications in ultrasound diagnostics, cross-center validation, and ethical safety assessments require more in-depth research to ensure robust and reliable clinical implementation. Through systematic investigation and technological advancements, the deep integration of AI and ultrasound medicine will provide more comprehensive solutions for clinical diagnostics.
Consequently, the implementation of AI technology in ultrasound and its comprehensive application in clinical practice still has considerable advancements to make. Nonetheless, we firmly believe that with the continuous development and refinement of technology, the integration of AI and ultrasound medicine will become increasingly profound and widespread, ultimately contributing significantly to global health.

Author Contributions

Conceptualization and design: X.Z. and K.Z.; original draft preparation: L.Y.; literature collection and organization: Q.L. and K.F.; writing—review and editing: K.Z. and X.Z. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the Key Research and Development Project of Shaanxi Province (2023-YBSF-121).

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Bahl, M. Combining AI and Radiomics to Improve the Accuracy of Breast US. Radiology 2024, 312, e241795. [Google Scholar] [CrossRef] [PubMed]
  2. Sanjeevi, G.; Gopalakrishnan, U.; Parthinarupothi, R.K.; Madathil, T. Deep learning supported echocardiogram analysis: A comprehensive review. Artif. Intell. Med. 2024, 151, 102866. [Google Scholar] [CrossRef]
  3. Akkus, Z.; Cai, J.; Boonrod, A.; Zeinoddini, A.; Weston, A.D.; Philbrick, K.A.; Erickson, B.J. A Survey of Deep-Learning Applications in Ultrasound: Artificial Intelligence-Powered Ultrasound for Improving Clinical Workflow. J. Am. Coll. Radiol. 2019, 16, 1318–1328. [Google Scholar] [CrossRef]
  4. Getzmann, J.M.; Zantonelli, G.; Messina, C.; Albano, D.; Serpi, F.; Gitto, S.; Sconfienza, L.M. The use of artificial intelligence in musculoskeletal ultrasound: A systematic review of the literature. Radiol. Med. 2024, 129, 1405–1411. [Google Scholar] [CrossRef]
  5. Loppini, M.; Gambaro, F.M.; Chiappetta, K.; Grappiolo, G.; Bianchi, A.M.; Corino, V.D.A. Automatic Identification of Failure in Hip Replacement: An Artificial Intelligence Approach. Bioengineering 2022, 9, 288. [Google Scholar] [CrossRef] [PubMed]
  6. Aruleba, R.T.; Adekiya, T.A.; Ayawei, N.; Obaido, G.; Aruleba, K.; Mienye, I.D.; Aruleba, I.; Ogbuokiri, B. COVID-19 Diagnosis: A Review of Rapid Antigen, RT-PCR and Artificial Intelligence Methods. Bioengineering 2022, 9, 153. [Google Scholar] [CrossRef] [PubMed]
  7. Roussel, E.; Capitanio, U.; Kutikov, A.; Oosterwijk, E.; Pedrosa, I.; Rowe, S.P.; Gorin, M.A. Novel Imaging Methods for Renal Mass Characterization: A Collaborative Review. Eur. Urol. 2022, 81, 476–488. [Google Scholar] [CrossRef]
  8. Xu, W.; Jia, X.; Mei, Z.; Gu, X.; Lu, Y.; Fu, C.C.; Zhang, R.; Gu, Y.; Chen, X.; Luo, X.; et al. Generalizability and Diagnostic Performance of AI Models for Thyroid US. Radiology 2023, 307, e221157. [Google Scholar] [CrossRef]
  9. Gandhi, S.; Mosleh, W.; Shen, J.; Chow, C.M. Automation, machine learning, and artificial intelligence in echocardiography: A brave new world. Echocardiography 2018, 35, 1402–1418. [Google Scholar] [CrossRef]
  10. Goldenberg, S.L.; Nir, G.; Salcudean, S.E. A new era: Artificial intelligence and machine learning in prostate cancer. Nat. Rev. Urol. 2019, 16, 391–403. [Google Scholar] [CrossRef]
  11. Zegarra, R.R.; Ghi, T. Use of artificial intelligence and deep learning in fetal ultrasound imaging. Ultrasound Obstet. Gynecol. 2023, 62, 185–194. [Google Scholar] [CrossRef] [PubMed]
  12. Cheng, P.M.; Montagnon, E.; Yamashita, R.; Pan, I.; Cadrin-Chênevert, A.; Romero, F.P.; Chartrand, G.; Kadoury, S.; Tang, A. Deep Learning: An Update for Radiologists. Radiographics 2021, 41, 1427–1445. [Google Scholar] [CrossRef] [PubMed]
  13. Shen, Y.T.; Chen, L.; Yue, W.W.; Xu, H.X. Artificial intelligence in ultrasound. Eur. J. Radiol. 2021, 139, 109717. [Google Scholar] [CrossRef]
  14. Hernandez, K.A.L.; Rienmüller, T.; Baumgartner, D.; Baumgartner, C. Deep learning in spatiotemporal cardiac imaging: A review of methodologies and clinical usability. Comput. Biol. Med. 2021, 130, 104200. [Google Scholar] [CrossRef] [PubMed]
  15. Yi, J.; Kang, H.K.; Kwon, J.H.; Kim, K.S.; Park, M.H.; Seong, Y.K.; Kim, D.W.; Ahn, B.; Ha, K.; Lee, J.; et al. Technology trends and applications of deep learning in ultrasonography: Image quality enhancement, diagnostic support, and improving workflow efficiency. Ultrasonography 2021, 40, 7–22. [Google Scholar] [CrossRef]
  16. Mitrea, D.A.; Brehar, R.; Nedevschi, S.; Lupsor-Platon, M.; Socaciu, M.; Badea, R. Hepatocellular Carcinoma Recognition from Ultrasound Images Using Combinations of Conventional and Deep Learning Techniques. Sensors 2023, 23, 2520. [Google Scholar] [CrossRef]
  17. Cao, C.L.; Li, Q.L.; Tong, J.; Shi, L.N.; Li, W.X.; Xu, Y.; Cheng, J.; Du, T.T.; Li, J.; Cui, X.W. Artificial intelligence in thyroid ultrasound. Front. Oncol. 2023, 13, 1060702. [Google Scholar] [CrossRef]
  18. Barinov, L.; Jairaj, A.; Middleton, W.D.; Beland, D.M.; Kirsch, J.; Filice, R.W.; Reverter, J.L.; Arguelles, I.; Grant, E.G. Improving the Efficacy of ACR TI-RADS Through Deep Learning-Based Descriptor Augmentation. J. Digit. Imaging 2023, 36, 2392–2401. [Google Scholar] [CrossRef]
  19. Etehadtavakol, M.; Etehadtavakol, M.; Ng, E.Y.K. Enhanced thyroid nodule segmentation through U-Net and VGG16 fusion with feature engineering: A comprehensive study. Comput. Methods Programs Biomed. 2024, 251, 108209. [Google Scholar] [CrossRef]
  20. Tessler, F.N.; Thomas, J. Artificial Intelligence for Evaluation of Thyroid Nodules: A Primer. Thyroid 2023, 33, 150–158. [Google Scholar] [CrossRef]
  21. Chang, C.Y.; Lei, Y.F.; Tseng, C.H.; Shih, S.R. Thyroid segmentation and volume estimation in ultrasound images. IEEE Trans. Biomed. Eng. 2010, 57, 1348–1357. [Google Scholar] [CrossRef] [PubMed]
  22. Chen, J.; You, H.; Li, K. A review of thyroid gland segmentation and thyroid nodule segmentation methods for medical ultrasound images. Comput. Methods Programs Biomed. 2020, 185, 105329. [Google Scholar] [CrossRef] [PubMed]
  23. Kumar, V.; Webb, J.; Gregory, A.; Meixner, D.D.; Knudsen, J.M.; Callstrom, M.; Fatemi, M.; Alizad, A. Automated Segmentation of Thyroid Nodule, Gland, and Cystic Components From Ultrasound Images Using Deep Learning. IEEE Access 2020, 8, 63482–63496. [Google Scholar] [CrossRef]
  24. Narayan, N.S.; Marziliano, P.; Kanagalingam, J.; Hobbs, C.G. Speckle Patch Similarity for Echogenicity-Based Multiorgan Segmentation in Ultrasound Images of the Thyroid Gland. IEEE J. Biomed. Health Inform. 2017, 21, 172–183. [Google Scholar] [CrossRef] [PubMed]
  25. Poudel, P.; Illanes, A.; Sheet, D.; Friebe, M. Evaluation of Commonly Used Algorithms for Thyroid Ultrasound Images Segmentation and Improvement Using Machine Learning Approaches. J. Healthc. Eng. 2018, 2018, 8087624. [Google Scholar] [CrossRef]
  26. Bojunga, J.; Trimboli, P. Thyroid ultrasound and its ancillary techniques. Rev. Endocr. Metab. Disord. 2024, 25, 161–173. [Google Scholar] [CrossRef]
  27. Peng, S.; Liu, Y.; Lv, W.; Liu, L.; Zhou, Q.; Yang, H.; Ren, J.; Liu, G.; Wang, X.; Zhang, X.; et al. Deep learning-based artificial intelligence model to assist thyroid nodule diagnosis and management: A multicentre diagnostic study. Lancet Digit. Health 2021, 3, e250–e259. [Google Scholar] [CrossRef]
  28. Wang, B.; Wan, Z.; Li, C.; Zhang, M.; Shi, Y.; Miao, X.; Jian, Y.; Luo, Y.; Yao, J.; Tian, W. Identification of benign and malignant thyroid nodules based on dynamic AI ultrasound intelligent auxiliary diagnosis system. Front. Endocrinol. 2022, 13, 1018321. [Google Scholar] [CrossRef]
  29. Zhou, T.; Xu, L.; Shi, J.; Zhang, Y.; Lin, X.; Wang, Y.; Hu, T.; Xu, R.; Xie, L.; Sun, L.; et al. US of thyroid nodules: Can AI-assisted diagnostic system compete with fine needle aspiration? Eur. Radiol. 2024, 34, 1324–1333. [Google Scholar] [CrossRef]
  30. Toro-Tobon, D.; Loor-Torres, R.; Duran, M.; Fan, J.W.; Ospina, N.S.; Wu, Y.; Brito, J.P. Artificial Intelligence in Thyroidology: A Narrative Review of the Current Applications, Associated Challenges, and Future Directions. Thyroid 2023, 33, 903–917. [Google Scholar] [CrossRef]
  31. Din, N.M.U.; Dar, R.A.; Rasool, M.; Assad, A. Breast cancer detection using deep learning: Datasets, methods, and challenges ahead. Comput. Biol. Med. 2022, 149, 106073. [Google Scholar] [CrossRef] [PubMed]
  32. Li, J.; Bu, Y.; Lu, S.; Pang, H.; Luo, C.; Liu, Y.; Qian, L. Development of a Deep Learning-Based Model for Diagnosing Breast Nodules with Ultrasound. J. Ultrasound Med. 2021, 40, 513–520. [Google Scholar] [CrossRef]
  33. Dong, F.; She, R.; Cui, C.; Shi, S.; Hu, X.; Zeng, J.; Wu, H.; Xu, J.; Zhang, Y. One step further into the blackbox: A pilot study of how to build more confidence around an AI-based decision system of breast nodule assessment in 2D ultrasound. Eur. Radiol. 2021, 31, 4991–5000. [Google Scholar] [CrossRef] [PubMed]
  34. Jia, Y.; Wu, R.; Lu, X.; Duan, Y.; Zhu, Y.; Ma, Y.; Nie, F. Deep Learning with Transformer or Convolutional Neural Network in the Assessment of Tumor-Infiltrating Lymphocytes (TILs) in Breast Cancer Based on US Images: A Dual-Center Retrospective Study. Cancers 2023, 15, 838. [Google Scholar] [CrossRef]
  35. Zhang, J.; Wu, J.; Zhou, X.S.; Shi, F.; Shen, D. Recent advancements in artificial intelligence for breast cancer: Image augmentation, segmentation, diagnosis, and prognosis approaches. Semin. Cancer Biol. 2023, 96, 11–25. [Google Scholar] [CrossRef]
  36. Yao, Z.; Luo, T.; Dong, Y.; Jia, X.; Deng, Y.; Wu, G.; Zhu, Y.; Zhang, J.; Liu, J.; Yang, L.; et al. Virtual elastography ultrasound via generative adversarial network for breast cancer diagnosis. Nat. Commun. 2023, 14, 788. [Google Scholar] [CrossRef] [PubMed]
  37. Ansari, M.Y.; Qaraqe, M.; Righetti, R.; Serpedin, E.; Qaraqe, K. Unveiling the future of breast cancer assessment: A critical review on generative adversarial networks in elastography ultrasound. Front. Oncol. 2023, 13, 1282536. [Google Scholar] [CrossRef]
  38. Bitencourt, A.; Naranjo, I.D.; Gullo, R.L.; Saccarelli, C.R.; Pinker, K. AI-enhanced breast imaging: Where are we and where are we heading? Eur. J. Radiol. 2021, 142, 109882. [Google Scholar] [CrossRef]
  39. Fujioka, T.; Kubota, K.; Mori, M.; Katsuta, L.; Kikuchi, Y.; Kimura, K.; Kimura, M.; Adachi, M.; Oda, G.; Nakagawa, T.; et al. Virtual Interpolation Images of Tumor Development and Growth on Breast Ultrasound Image Synthesis with Deep Convolutional Generative Adversarial Networks. J. Ultrasound Med. 2021, 40, 61–69. [Google Scholar] [CrossRef]
  40. Li, X.; Wang, Y.; Zhao, Y.; Wei, Y. Fast Speckle Noise Suppression Algorithm in Breast Ultrasound Image Using Three-Dimensional Deep Learning. Front. Physiol. 2022, 13, 880966. [Google Scholar] [CrossRef]
  41. Pang, T.; Wong, J.H.D.; Ng, W.L.; Chan, C.S. Semi-supervised GAN-based Radiomics Model for Data Augmentation in Breast Ultrasound Mass Classification. Comput. Methods Programs Biomed. 2021, 203, 106018. [Google Scholar] [CrossRef] [PubMed]
  42. Vimala, B.B.; Srinivasan, S.; Mathivanan, S.K.; Muthukumaran, V.; Babu, J.C.; Herencsar, N.; Vilcekova, L. Image Noise Removal in Ultrasound Breast Images Based on Hybrid Deep Learning Technique. Sensors 2023, 23, 1167. [Google Scholar] [CrossRef]
  43. Guo, Y.; Chen, M.; Yang, L.; Yin, H.; Yang, H.; Zhou, Y. A neural network with a human learning paradigm for breast fibroadenoma segmentation in sonography. Biomed. Eng. Online 2024, 23, 5. [Google Scholar] [CrossRef]
  44. He, Q.; Yang, Q.; Xie, M. HCTNet: A hybrid CNN-transformer network for breast ultrasound image segmentation. Comput. Biol. Med. 2023, 155, 106629. [Google Scholar] [CrossRef]
  45. Huang, Q.; Luo, Y.; Zhang, Q. Breast ultrasound image segmentation: A survey. Int. J. Comput. Assist. Radiol. Surg. 2017, 12, 493–507. [Google Scholar] [CrossRef]
  46. Hussain, S.; Xi, X.; Ullah, I.; Inam, S.A.; Naz, F.; Shaheed, K.; Ali, S.A.; Tian, C. A Discriminative Level Set Method with Deep Supervision for Breast Tumor Segmentation. Comput. Biol. Med. 2022, 149, 105995. [Google Scholar] [CrossRef] [PubMed]
  47. Lei, Y.; He, X.; Yao, J.; Wang, T.; Wang, L.; Li, W.; Curran, W.J.; Liu, T.; Xu, D.; Yang, X. Breast tumor segmentation in 3D automatic breast ultrasound using Mask scoring R-CNN. Med. Phys. 2021, 48, 204–214. [Google Scholar] [CrossRef] [PubMed]
  48. Tagnamas, J.; Ramadan, H.; Yahyaouy, A.; Tairi, H. Multi-task approach based on combined CNN-transformer for efficient segmentation and classification of breast tumors in ultrasound images. Vis. Comput. Ind. Biomed. Art 2024, 7, 2. [Google Scholar] [CrossRef]
  49. Vakanski, A.; Xian, M.; Freer, P.E. Attention-Enriched Deep Learning Model for Breast Tumor Segmentation in Ultrasound Images. Ultrasound Med. Biol. 2020, 46, 2819–2833. [Google Scholar] [CrossRef]
  50. Wu, R.; Lu, X.; Yao, Z.; Ma, Y. MFMSNet: A Multi-frequency and Multi-scale Interactive CNN-Transformer Hybrid Network for breast ultrasound image segmentation. Comput. Biol. Med. 2024, 177, 108616. [Google Scholar] [CrossRef]
  51. Magnuska, Z.A.; Roy, R.; Palmowski, M.; Kohlen, M.; Winkler, B.S.; Pfeil, T.; Boor, P.; Schulz, V.; Krauss, K.; Stickeler, E.; et al. Combining Radiomics and Autoencoders to Distinguish Benign and Malignant Breast Tumors on US Images. Radiology 2024, 312, e232554. [Google Scholar] [CrossRef]
  52. Nedadur, R.; Wang, B.; Tsang, W. Artificial intelligence for the echocardiographic assessment of valvular heart disease. Heart 2022, 108, 1592–1599. [Google Scholar] [CrossRef] [PubMed]
  53. Siqueira, V.S.D.; Borges, M.M.; Furtado, R.G.; Dourado, C.N.; Costa, R.M.D. Artificial intelligence applied to support medical decisions for the automatic analysis of echocardiogram images: A systematic review. Artif. Intell. Med. 2021, 120, 102165. [Google Scholar] [CrossRef] [PubMed]
  54. Siqueira, V.S.D.; Rodrigues, D.D.C.; Dourado, C.N.; Borges, M.M.; Costa, R.M.D. Machine Learning Applied to Support Medical Decision in Transthoracic Echocardiogram Exams: A Systematic Review. In Proceedings of the 2020 IEEE 44th Annual Computers, Software, and Applications Conference (COMPSAC), Madrid, Spain, 13–17 July 2020. [Google Scholar]
  55. Zamzmi, G.; Hsu, L.Y.; Li, W.; Sachdev, V.; Antani, S. Harnessing Machine Intelligence in Automatic Echocardiogram Analysis: Current Status, Limitations, and Future Directions. IEEE Rev. Biomed. Eng. 2020, 14, 181–203. [Google Scholar] [CrossRef] [PubMed]
  56. Föllmer, B.; Williams, M.C.; Dey, D.; Arbab-Zadeh, A.; Maurovich-Horvat, P.; Volleberg, R.; Rueckert, D.; Schnabel, J.A.; Newby, D.E.; Dweck, M.R.; et al. Roadmap on the use of artificial intelligence for imaging of vulnerable atherosclerotic plaque in coronary arteries. Nat. Rev. Cardiol. 2024, 21, 51–64. [Google Scholar] [CrossRef]
  57. Sengupta, P.P.; Dey, D.; Davies, R.H.; Duchateau, N.; Yanamala, N. Challenges for augmenting intelligence in cardiac imaging. Lancet Digit. Health 2024, 6, e739–e748. [Google Scholar] [CrossRef]
  58. Schattenberg, J.M.; Chalasani, N.; Alkhouri, N. Artificial Intelligence Applications in Hepatology. Clin. Gastroenterol. Hepatol. 2023, 21, 2015–2025. [Google Scholar] [CrossRef]
  59. Balsano, C.; Burra, P.; Duvoux, C.; Alisi, A.; Piscaglia, F.; Gerussi, A.; Special Interest Group (SIG) Artificial Intelligence; Liver, D.; Italian Association for the Study of Liver (AISF). Artificial Intelligence and liver: Opportunities and barriers. Dig. Liver Dis. 2023, 55, 1455–1461. [Google Scholar] [CrossRef]
  60. Zamanian, H.; Mostaar, A.; Azadeh, P.; Ahmadi, M. Implementation of Combinational Deep Learning Algorithm for Non-alcoholic Fatty Liver Classification in Ultrasound Images. J. Biomed. Phys. Eng. 2021, 11, 73–84. [Google Scholar] [CrossRef]
  61. Yang, Q.; Wei, J.; Hao, X.; Kong, D.; Yu, X.; Jiang, T.; Xi, J.; Cai, W.; Luo, Y.; Jing, X.; et al. Improving B-mode ultrasound diagnostic performance for focal liver lesions using deep learning: A multicentre study. EBioMedicine 2020, 56, 102777. [Google Scholar] [CrossRef]
  62. Uche-Anya, E.; Anyane-Yeboa, A.; Berzin, T.M.; Ghassemi, M.; May, F.P. Artificial intelligence in gastroenterology and hepatology: How to advance clinical practice while ensuring health equity. Gut 2022, 71, 1909–1915. [Google Scholar] [CrossRef]
  63. Shrestha, P.; Poudyal, B.; Yadollahi, S.; Wright, D.E.; Gregory, A.V.; Warner, J.D.; Korfiatis, P.; Green, I.C.; Rassier, S.L.; Mariani, A.; et al. A systematic review on the use of artificial intelligence in gynecologic imaging–Background, state of the art, and future directions. Gynecol. Oncol. 2022, 166, 596–605. [Google Scholar] [CrossRef] [PubMed]
  64. Dhombres, F.; Bonnard, J.; Bailly, K.; Maurice, P.; Papageorghiou, A.T.; Jouannic, J.M. Contributions of Artificial Intelligence Reported in Obstetrics and Gynecology Journals: Systematic Review. J. Med. Internet Res. 2022, 24, e35465. [Google Scholar] [CrossRef]
  65. Akazawa, M.; Hashimoto, K. Artificial intelligence in gynecologic cancers: Current status and future challenges–A systematic review. Artif. Intell. Med. 2021, 120, 102164. [Google Scholar] [CrossRef]
  66. Jost, E.; Kosian, P.; Cruz, J.J.; Albarqouni, S.; Gembruch, U.; Strizek, B.; Recker, F. Evolving the Era of 5D Ultrasound? A Systematic Literature Review on the Applications for Artificial Intelligence Ultrasound Imaging in Obstetrics and Gynecology. J. Clin. Med. 2023, 12, 6833. [Google Scholar] [CrossRef] [PubMed]
  67. Sharma, H.; Droste, R.; Chatelain, P.; Drukker, L.; Papageorghiou, A.T.; Noble, J.A. Spatio-Temporal Partitioning and Description of Full-Length Routine Fetal Anomaly Ultrasound Scans. In Proceedings of the 2019 IEEE 16th International Symposium on Biomedical Imaging (ISBI 2019), Venice, Italy, 8–11 April 2019; pp. 987–990. [Google Scholar] [CrossRef]
  68. Sharma, H.; Drukker, L.; Chatelain, P.; Droste, R.; Papageorghiou, A.T.; Noble, J.A. Knowledge representation and learning of operator clinical workflow from full-length routine fetal ultrasound scan videos. Med. Image Anal. 2021, 69, 101973. [Google Scholar] [CrossRef]
  69. Drukker, L.; Sharma, H.; Karim, J.N.; Droste, R.; Noble, J.A.; Papageorghiou, A.T. Clinical workflow of sonographers performing fetal anomaly ultrasound scans: Deep-learning-based analysis. Ultrasound Obstet. Gynecol. 2022, 60, 759–765. [Google Scholar] [CrossRef] [PubMed]
  70. Zhao, H.; Zheng, Q.; Teng, C.; Yasrab, R.; Drukker, L.; Papageorghiou, A.T.; Noble, J.A. Memory-based unsupervised video clinical quality assessment with multi-modality data in fetal ultrasound. Med. Image Anal. 2023, 90, 102977. [Google Scholar] [CrossRef]
  71. Yasrab, R.; Fu, Z.; Zhao, H.; Lee, L.H.; Sharma, H.; Drukker, L.; Papageorgiou, A.T.; Noble, J.A. A Machine Learning Method for Automated Description and Workflow Analysis of First Trimester Ultrasound Scans. IEEE Trans. Med. Imaging 2023, 42, 1301–1313. [Google Scholar] [CrossRef]
  72. Shahrivar, R.Y.; Karami, F.; Karami, E. Enhancing Fetal Anomaly Detection in Ultrasonography Images: A Review of Machine Learning-Based Approaches. Biomimetics 2023, 8, 519. [Google Scholar] [CrossRef]
  73. Martínez-Más, J.; Bueno-Crespo, A.; Khazendar, S.; Remezal-Solano, M.; Martínez-Cendán, J.P.; Jassim, S.; Du, H.; Al Assam, H.; Bourne, T.; Timmerman, D. Evaluation of machine learning methods with Fourier Transform features for classifying ovarian tumors based on ultrasound images. PLoS ONE 2019, 14, e0219388. [Google Scholar] [CrossRef] [PubMed]
  74. Giourga, M.; Petropoulos, I.; Stavros, S.; Potiris, A.; Gerede, A.; Sapantzoglou, I.; Fanaki, M.; Papamattheou, E.; Karasmani, C.; Karampitsakos, T.; et al. Enhancing Ovarian Tumor Diagnosis: Performance of Convolutional Neural Networks in Classifying Ovarian Masses Using Ultrasound Images. J. Clin. Med. 2024, 13, 4123. [Google Scholar] [CrossRef]
  75. Christiansen, F.; Epstein, E.L.; Smedberg, E.; Åkerlund, M.; Smith, K.; Epstein, E. Ultrasound image analysis using deep neural networks for discriminating between benign and malignant ovarian tumors: Comparison with expert subjective assessment. Ultrasound Obstet. Gynecol. 2021, 57, 155–163. [Google Scholar] [CrossRef] [PubMed]
  76. He, X.; Bai, X.H.; Chen, H.; Feng, W.W. Machine learning models in evaluating the malignancy risk of ovarian tumors: A comparative study. J. Ovarian Res. 2024, 17, 219. [Google Scholar] [CrossRef]
  77. Drukker, L.; Noble, J.A.; Papageorghiou, A.T. Introduction to artificial intelligence in ultrasound imaging in obstetrics and gynecology. Ultrasound Obstet. Gynecol. 2020, 56, 498–505. [Google Scholar] [CrossRef] [PubMed]
  78. Liu, Y.; Zhou, Q.; Peng, B.; Jiang, J.; Fang, L.; Weng, W.; Wang, W.; Wang, S.; Zhu, X. Automatic Measurement of Endometrial Thickness from Transvaginal Ultrasound Images. Front. Bioeng. Biotechnol. 2022, 10, 853845. [Google Scholar] [CrossRef]
  79. Moro, F.; Albanese, M.; Boldrini, L.; Chiappa, V.; Lenkowicz, J.; Bertolina, F.; Mascilini, F.; Moroni, R.; Gambacorta, M.A.; Raspagliesi, F.; et al. Developing and validating ultrasound-based radiomics models for predicting high-risk endometrial cancer. Ultrasound Obstet. Gynecol. 2022, 60, 256–268. [Google Scholar] [CrossRef]
  80. Zhao, X.; Liu, M.; Wu, S.; Zhang, B.; Burjoo, A.; Yang, Y.; Xu, D. Artificial intelligence diagnosis of intrauterine adhesion by 3D ultrasound imaging: A prospective study. Quant. Imaging Med. Surg. 2023, 13, 2314–2327. [Google Scholar] [CrossRef]
  81. Szentimrey, Z.; Ameri, G.; Hong, C.X.; Cheung, R.Y.K.; Ukwatta, E.; Eltahawi, A. Automated segmentation and measurement of the female pelvic floor from the mid-sagittal plane of 3D ultrasound volumes. Med. Phys. 2023, 50, 6215–6227. [Google Scholar] [CrossRef]
  82. van den Noort, F.; Manzini, C.; van der Vaart, C.H.; van Limbeek, M.A.J.; Slump, C.H.; Grob, A.T.M. Automatic identification and segmentation of slice of minimal hiatal dimensions in transperineal ultrasound volumes. Ultrasound Obstet. Gynecol. 2022, 60, 570–576. [Google Scholar] [CrossRef]
  83. Williams, H.; Cattani, L.; Van Schoubroeck, D.; Yaqub, M.; Sudre, C.; Vercauteren, T.; D’Hooge, J.; Deprest, J. Automatic Extraction of Hiatal Dimensions in 3-D Transperineal Pelvic Ultrasound Recordings. Ultrasound Med. Biol. 2021, 47, 3470–3479. [Google Scholar] [CrossRef]
  84. Nhat, P.T.H.; Van Hao, N.; Tho, P.V.; Kerdegari, H.; Pisani, L.; Thu, L.N.M.; Phuong, L.T.; Duong, H.T.H.; Thuy, D.B.; McBride, A.; et al. Clinical benefit of AI-assisted lung ultrasound in a resource-limited intensive care unit. Crit. Care 2023, 27, 257. [Google Scholar] [CrossRef] [PubMed]
  85. Fiedler, H.C.; Prager, R.; Smith, D.; Wu, D.; Dave, C.; Tschirhart, J.; Wu, B.; Van Berlo, B.; Malthaner, R.; Arntfield, R. Automated Real-Time Detection of Lung Sliding Using Artificial Intelligence: A Prospective Diagnostic Accuracy Study. Chest 2024, 166, 362–370. [Google Scholar] [CrossRef] [PubMed]
  86. VanBerlo, B.; Wu, D.; Li, B.; Rahman, M.A.; Hogg, G.; VanBerlo, B.; Tschirhart, J.; Ford, A.; Ho, J.; McCauley, J.; et al. Accurate assessment of the lung sliding artefact on lung ultrasonography using a deep learning approach. Comput. Biol. Med. 2022, 148, 105953. [Google Scholar] [CrossRef] [PubMed]
  87. Wu, D.; Smith, D.; VanBerlo, B.; Roshankar, A.; Lee, H.; Li, B.; Ali, F.; Rahman, M.; Basmaji, J.; Tschirhart, J.; et al. Improving the Generalizability and Performance of an Ultrasound Deep Learning Model Using Limited Multicenter Data for Lung Sliding Artifact Identification. Diagnostics 2024, 14, 1081. [Google Scholar] [CrossRef]
  88. Yi, P.H.; Garner, H.W.; Hirschmann, A.; Jacobson, J.A.; Omoumi, P.; Oh, K.; Zech, J.R.; Lee, Y.H. Clinical Applications, Challenges, and Recommendations for Artificial Intelligence in Musculoskeletal and Soft-Tissue Ultrasound: AJR Expert Panel Narrative Review. AJR Am. J. Roentgenol. 2024, 222, e2329530. [Google Scholar] [CrossRef]
  89. Gitto, S.; Serpi, F.; Albano, D.; Risoleo, G.; Fusco, S.; Messina, C.; Sconfienza, L.M. AI applications in musculoskeletal imaging: A narrative review. Eur. Radiol. Exp. 2024, 8, 22. [Google Scholar] [CrossRef]
  90. Gorelik, N.; Chong, J.; Lin, D.J. Pattern Recognition in Musculoskeletal Imaging Using Artificial Intelligence. Semin. Musculoskelet. Radiol. 2020, 24, 38–49. [Google Scholar] [CrossRef]
  91. Bowness, J.S.; Metcalfe, D.; El-Boghdadly, K.; Thurley, N.; Morecroft, M.; Hartley, T.; Krawczyk, J.; Noble, J.A.; Higham, H. Artificial intelligence for ultrasound scanning in regional anaesthesia: A scoping review of the evidence from multiple disciplines. Br. J. Anaesth. 2024, 132, 1049–1062. [Google Scholar] [CrossRef]
  92. Hesse, L.S.; Aliasi, M.; Moser, F.; the INTERGROWTH-21st Consortium; Haak, M.C.; Xie, W.; Jenkinson, M.; Namburete, A.I.L. Subcortical segmentation of the fetal brain in 3D ultrasound using deep learning. Neuroimage 2022, 254, 119117. [Google Scholar] [CrossRef]
  93. Weichert, J.; Scharf, J.L. Advancements in Artificial Intelligence for Fetal Neurosonography: A Comprehensive Review. J. Clin. Med. 2024, 13, 5626. [Google Scholar] [CrossRef] [PubMed]
  94. Xie, B.; Lei, T.; Wang, N.; Cai, H.; Xian, J.; He, M.; Zhang, L.; Xie, H. Computer-aided diagnosis for fetal brain ultrasound images using deep convolutional neural networks. Int. J. Comput. Assist. Radiol. Surg. 2020, 15, 1303–1312. [Google Scholar] [CrossRef] [PubMed]
  95. Xie, H.N.; Wang, N.; He, M.; Zhang, L.H.; Cai, H.M.; Xian, J.B.; Lin, M.F.; Zheng, J.; Yang, Y.Z. Using deep-learning algorithms to classify fetal brain ultrasound images as normal or abnormal. Ultrasound Obstet. Gynecol. 2020, 56, 579–587. [Google Scholar] [CrossRef] [PubMed]
  96. Jaremko, J.L.; Hareendranathan, A.; Bolouri, S.E.S.; Frey, R.F.; Dulai, S.; Bailey, A.L. AI aided workflow for hip dysplasia screening using ultrasound in primary care clinics. Sci. Rep. 2023, 13, 9224. [Google Scholar] [CrossRef]
  97. Ha, E.J.; Lee, J.H.; Lee, D.H.; Moon, J.; Lee, H.; Kim, Y.N.; Kim, M.; Na, D.G.; Kim, J.H. Artificial Intelligence Model Assisting Thyroid Nodule Diagnosis and Management: A Multicenter Diagnostic Study. J. Clin. Endocrinol. Metab. 2024, 109, 527–535. [Google Scholar] [CrossRef]
  98. Tong, W.J.; Wu, S.H.; Cheng, M.Q.; Huang, H.; Liang, J.Y.; Li, C.Q.; Guo, H.L.; He, D.N.; Liu, Y.H.; Xiao, H.; et al. Integration of Artificial Intelligence Decision Aids to Reduce Workload and Enhance Efficiency in Thyroid Nodule Management. JAMA Netw. Open 2023, 6, e2313674. [Google Scholar] [CrossRef]
  99. El Naqa, I.; Karolak, A.; Luo, Y.; Folio, L.; Tarhini, A.A.; Rollison, D.; Parodi, K. Translation of AI into oncology clinical practice. Oncogene 2023, 42, 3089–3097. [Google Scholar] [CrossRef]
  100. Rowe, S.P.; Soyer, P.; Fishman, E.K. The future of radiology: What if artificial intelligence is really as good as predicted? Diagn. Interv. Imaging 2022, 103, 385–386. [Google Scholar] [CrossRef]
  101. Daum, N.; Blaivas, M.; Goudie, A.; Hoffmann, B.; Jenssen, C.; Neubauer, R.; Recker, F.; Moga, T.V.; Zervides, C.; Dietrich, C.F. Student ultrasound education, current view and controversies. Role of Artificial Intelligence, Virtual Reality and telemedicine. Ultrasound J. 2024, 16, 44. [Google Scholar] [CrossRef]
  102. Wong, R.S.; Ming, L.C.; Ali, R.A.R. The Intersection of ChatGPT, Clinical Medicine, and Medical Education. JMIR Med. Educ. 2023, 9, e47274. [Google Scholar] [CrossRef]
  103. Nguyen, G.K.; Shetty, A.S. Artificial Intelligence and Machine Learning: Opportunities for Radiologists in Training. J. Am. Coll. Radiol. 2018, 15, 1320–1321. [Google Scholar] [CrossRef] [PubMed]
  104. Salastekar, N.V.; Maxfield, C.; Hanna, T.N.; Krupinski, E.A.; Heitkamp, D.; Grimm, L.J. Artificial Intelligence/Machine Learning Education in Radiology: Multi-institutional Survey of Radiology Residents in the United States. Acad. Radiol. 2023, 30, 1481–1487. [Google Scholar] [CrossRef] [PubMed]
  105. Shimizu, H.; Nakayama, K.I. Artificial intelligence in oncology. Cancer Sci. 2020, 111, 1452–1460. [Google Scholar] [CrossRef]
  106. Boeken, T.; Feydy, J.; Lecler, A.; Soyer, P.; Feydy, A.; Barat, M.; Duron, L. Artificial intelligence in diagnostic and interventional radiology: Where are we now? Diagn. Interv. Imaging 2023, 104, 1–5. [Google Scholar] [CrossRef] [PubMed]
  107. Najjar, R. Redefining Radiology: A Review of Artificial Intelligence Integration in Medical Imaging. Diagnostics 2023, 13, 2760. [Google Scholar] [CrossRef]
  108. Waring, J.; Lindvall, C.; Umeton, R. Automated machine learning: Review of the state-of-the-art and opportunities for healthcare. Artif. Intell. Med. 2020, 104, 101822. [Google Scholar] [CrossRef]
  109. Narang, A.; Bae, R.; Hong, H.; Thomas, Y.; Surette, S.; Cadieu, C.; Chaudhry, A.; Martin, R.P.; McCarthy, P.M.; Rubenson, D.S.; et al. Utility of a Deep-Learning Algorithm to Guide Novices to Acquire Echocardiograms for Limited Diagnostic Use. JAMA Cardiol. 2021, 6, 624–632. [Google Scholar] [CrossRef]
  110. Zhang, J.; Xiao, S.; Zhu, Y.; Zhang, Z.; Cao, H.; Xie, M.; Zhang, L. Advances in the Application of Artificial Intelligence in Fetal Echocardiography. J. Am. Soc. Echocardiogr. 2024, 37, 550–561. [Google Scholar] [CrossRef]
  111. Berg, W.A.; Aldrete, A.L.L.; Jairaj, A.; Parea, J.C.L.; García, C.Y.; McClennan, R.C.; Cen, S.Y.; Larsen, L.H.; de Lara, M.T.S.; Love, S. Toward AI-supported US Triage of Women with Palpable Breast Lumps in a Low-Resource Setting. Radiology 2023, 307, e223351. [Google Scholar] [CrossRef]
  112. Esmaeilzadeh, P. Challenges and strategies for wide-scale artificial intelligence (AI) deployment in healthcare practices: A perspective for healthcare organizations. Artif. Intell. Med. 2024, 151, 102861. [Google Scholar] [CrossRef]
  113. Duan, S.; Liu, L.; Chen, Y.; Yang, L.; Zhang, Y.; Wang, S.; Hao, L.; Zhang, L. A 5G-powered robot-assisted teleultrasound diagnostic system in an intensive care unit. Crit. Care 2021, 25, 134. [Google Scholar] [CrossRef]
  114. Lee, H.; Kang, J.; Yeo, J. Medical Specialty Recommendations by an Artificial Intelligence Chatbot on a Smartphone: Development and Deployment. J. Med. Internet Res. 2021, 23, e27460. [Google Scholar] [CrossRef]
  115. Hosny, A.; Parmar, C.; Quackenbush, J.; Schwartz, L.H.; Aerts, H. Artificial intelligence in radiology. Nat. Rev. Cancer 2018, 18, 500–510. [Google Scholar] [CrossRef] [PubMed]
  116. Alami, H.; Rivard, L.; Lehoux, P.; Hoffman, S.J.; Cadeddu, S.B.M.; Savoldelli, M.; Samri, M.A.; Ag Ahmed, M.A.; Fleet, R.; Fortin, J.P. Artificial intelligence in health care: Laying the Foundation for Responsible, sustainable, and inclusive innovation in low- and middle-income countries. Glob. Health 2020, 16, 52. [Google Scholar] [CrossRef]
  117. Pesapane, F.; Volonté, C.; Codari, M.; Sardanelli, F. Artificial intelligence as a medical device in radiology: Ethical and regulatory issues in Europe and the United States. Insights Imaging 2018, 9, 745–753. [Google Scholar] [CrossRef] [PubMed]
  118. Meszaros, J.; Minari, J.; Huys, I. The future regulation of artificial intelligence systems in healthcare services and medical research in the European Union. Front. Genet. 2022, 13, 927721. [Google Scholar] [CrossRef] [PubMed]
  119. Gottardelli, B.; Gatta, R.; Nucciarelli, L.; Tudor, A.M.; Tavazzi, E.; Vallati, M.; Orini, S.; Di Giorgi, N.; Damiani, A. GEN-RWD Sandbox: Bridging the gap between hospital data privacy and external research insights with distributed analytics. BMC Med. Inform. Decis. Mak. 2024, 24, 170. [Google Scholar] [CrossRef]
  120. Cohen, I.G.; Gerke, S.; Kramer, D.B. Ethical and Legal Implications of Remote Monitoring of Medical Devices. Milbank Q. 2020, 98, 1257–1289. [Google Scholar] [CrossRef]
  121. Aggarwal, R.; Sounderajah, V.; Martin, G.; Ting, D.S.W.; Karthikesalingam, A.; King, D.; Ashrafian, H.; Darzi, A. Diagnostic accuracy of deep learning in medical imaging: A systematic review and meta-analysis. NPJ Digit. Med. 2021, 4, 65. [Google Scholar] [CrossRef]
  122. Miller, D.D.; Brown, E.W. Artificial Intelligence in Medical Practice: The Question to the Answer? Am. J. Med. 2018, 131, 129–133. [Google Scholar] [CrossRef]
  123. Bukowski, M.; Farkas, R.; Beyan, O.; Moll, L.; Hahn, H.; Kiessling, F.; Schmitz-Rode, T. Implementation of eHealth and AI integrated diagnostics with multidisciplinary digitized data: Are we ready from an international perspective? Eur. Radiol. 2020, 30, 5510–5524. [Google Scholar] [CrossRef] [PubMed]
  124. Huisman, M.; Ranschaert, E.; Parker, W.; Mastrodicasa, D.; Koci, M.; de Santos, D.P.; Coppola, F.; Morozov, S.; Zins, M.; Bohyn, C.; et al. An international survey on AI in radiology in 1041 radiologists and radiology residents part 2: Expectations, hurdles to implementation, and education. Eur. Radiol. 2021, 31, 8797–8806. [Google Scholar] [CrossRef] [PubMed]
  125. Kalra, N.; Verma, P.; Verma, S. Advancements in AI based healthcare techniques with FOCUS ON diagnostic techniques. Comput. Biol. Med. 2024, 179, 108917. [Google Scholar] [CrossRef] [PubMed]
Figure 1. The relationship between artificial intelligence, machine learning, deep learning, and big data.
Figure 2. Flowchart of AI-assisted image diagnosis. This diagram outlines the AI-enhanced ultrasound analysis pipeline: (1) image acquisition and preprocessing (contrast/texture enhancement, GIE denoising), (2) hierarchical feature extraction via Recursive-Scale UNet (RSU) with multi-resolution analysis, (3) model training using gradient descent optimization with directional (ORN) and temporal (RNN) networks, and (4) diagnostic outputs including quantitative measurements and automated reports. Iterative refinement and clinician validation checkpoints ensure robust clinical integration.
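The training stage of the pipeline in Figure 2 relies on gradient descent optimization. As a minimal, self-contained illustration of that step (a toy logistic-regression classifier on synthetic lesion features, not code from any of the reviewed models), the loop below repeatedly computes the cross-entropy gradient and moves the weights against it:

```python
import numpy as np

# Synthetic data: 64 "lesions", each with 16 extracted image features.
# Labels follow a simple linear rule so the toy model can learn them.
rng = np.random.default_rng(1)
X = rng.standard_normal((64, 16))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

w = np.zeros(16)   # model weights
b = 0.0            # bias
lr = 0.5           # learning rate

for _ in range(200):                      # gradient-descent iterations
    p = sigmoid(X @ w + b)                # predicted probability per lesion
    grad_w = X.T @ (p - y) / len(y)       # gradient of mean cross-entropy loss
    grad_b = np.mean(p - y)
    w -= lr * grad_w                      # step against the gradient
    b -= lr * grad_b

accuracy = np.mean((sigmoid(X @ w + b) > 0.5) == y)
print(f"training accuracy: {accuracy:.2f}")
```

Deep networks in the reviewed systems optimize millions of parameters with the same principle, with backpropagation supplying the gradients layer by layer.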
Figure 3. Approaches for spatiotemporal feature extraction. A combined CNN and RNN architecture was used to extract the spatial and temporal features of liver ultrasound images.
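The CNN-plus-RNN design of Figure 3 can be sketched in a few lines: a convolutional stage summarizes each frame spatially, and a recurrent stage folds the per-frame feature vectors over time into one spatiotemporal embedding. The numpy code below is a schematic with random (untrained) weights, assumed for illustration only, not the architecture of any cited study:

```python
import numpy as np

rng = np.random.default_rng(0)

def conv2d_valid(img, kernel):
    """Naive 2-D valid cross-correlation for the toy CNN stage."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def cnn_features(frame, kernels):
    """Spatial stage: convolve, ReLU, then global-average-pool each map."""
    return np.array([np.maximum(conv2d_valid(frame, k), 0).mean() for k in kernels])

def rnn_aggregate(feature_seq, W_h, W_x):
    """Temporal stage: a vanilla RNN folding the per-frame feature vectors."""
    h = np.zeros(W_h.shape[0])
    for x in feature_seq:
        h = np.tanh(W_h @ h + W_x @ x)
    return h

# Toy cine loop: 8 frames of 32x32 pixels.
frames = rng.standard_normal((8, 32, 32))
kernels = rng.standard_normal((4, 3, 3))   # 4 spatial filters (would be learned)
W_h = rng.standard_normal((6, 6)) * 0.1    # hidden-to-hidden weights
W_x = rng.standard_normal((6, 4)) * 0.1    # input-to-hidden weights

per_frame = [cnn_features(f, kernels) for f in frames]  # spatial features per frame
video_embedding = rnn_aggregate(per_frame, W_h, W_x)    # spatiotemporal summary
print(video_embedding.shape)  # (6,)
```

In practice the convolutional filters and recurrent weights are trained jointly, and the final embedding feeds a classification or regression head.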
Figure 4. Deep learning-assisted diagnosis of breast cancer [34]. Reproduced with permission from Yingying Jia et al.
Figure 5. AI-assisted and AI-driven applications in educational workflows.
Figure 6. Flowchart illustrating the integration of AI with emerging technologies in ultrasound medicine.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

