Review

Artificial Intelligence (AI) Applications for Point of Care Ultrasound (POCUS) in Low-Resource Settings: A Scoping Review

1 Department of Informatics, University of California, Irvine, CA 92697, USA
2 Department of Emergency Medicine, Brigham and Women’s Hospital, Boston, MA 02115, USA
3 Department of Emergency Medicine, University of California, Irvine, CA 92697, USA
* Author to whom correspondence should be addressed.
Diagnostics 2024, 14(15), 1669; https://doi.org/10.3390/diagnostics14151669
Submission received: 13 July 2024 / Revised: 26 July 2024 / Accepted: 28 July 2024 / Published: 1 August 2024
(This article belongs to the Special Issue Ultrasound: An Important Tool in Critical Care)

Abstract

Advancements in artificial intelligence (AI) for point-of-care ultrasound (POCUS) have ushered in new possibilities for medical diagnostics in low-resource settings. This review explores the current landscape of AI applications for POCUS in these environments, analyzing studies sourced from three databases: SCOPUS, PubMed, and Google Scholar. Initially, 1196 records were identified, of which 1167 were excluded after a two-stage screening, leaving 29 unique studies for review. The majority of studies focused on deep learning algorithms to facilitate POCUS operation and interpretation in resource-constrained settings. Various types of low-resource settings were targeted, with a significant emphasis on low- and middle-income countries (LMICs), rural/remote areas, and emergency contexts. Notable limitations include challenges in generalizability, dataset availability, regional disparities in research, patient compliance, and ethical considerations. Additionally, the lack of standardization in POCUS devices, protocols, and algorithms emerged as a significant barrier to AI implementation. The diversity of POCUS AI applications across domains (e.g., lung, hip, heart) illustrates the challenge of tailoring solutions to the specific needs of each application. By separating the analysis by application area, researchers can better understand the distinct impacts and limitations of AI and align research and development efforts with the unique characteristics of each clinical condition. Despite these challenges, POCUS AI systems show promise in bridging gaps in healthcare delivery by aiding clinicians in low-resource settings. Future research should prioritize addressing the gaps identified in this review to enhance the feasibility and effectiveness of POCUS AI applications and improve healthcare outcomes in resource-constrained environments.

1. Introduction

The global diagnostic ultrasound market has seen steady growth, reaching a value of USD 7.39 billion in 2023, with projections expecting it to reach approximately USD 11 billion by 2033 [1,2]. This growth stems from the strengths of ultrasonography: it is portable, affordable, and radiation-free, unlike computed tomography (CT) [3]. Point-of-care ultrasound (POCUS) refers to ultrasound performed by the clinician at the bedside of their patient. Despite concerns that its portability might compromise performance, POCUS machines largely retain conventional ultrasound features and perform comparably well [4,5]. POCUS holds immense potential to make medical care more accessible, even in the most austere conditions, owing to its small size, portability, and affordability. This makes it an invaluable tool in places with limited resources. Accordingly, POCUS has been widely adopted in various resource-limited settings, such as developing countries and conflict zones, areas affected by war or political instability that disrupt essential services such as housing, transportation, communication, sanitation, water, and healthcare [6,7,8]. In this review, low-resource setting refers to, but is not limited to, environments in which resources for high-quality healthcare (e.g., finances, trained personnel, medical equipment, computing resources) are constrained [9,10]. Specifically, this review focuses on the following low-resource areas: rural or remote settings [11], low- and middle-income countries (LMICs) [7,12], emergency contexts [13], and environments lacking key resources [14].
Artificial intelligence (AI) optimizes processes through automation and in-depth analyses that surpass human capability and thus has important implications for POCUS in low-resource settings. As ultrasound machines become more ubiquitous and portable, more clinicians will continue to adopt ultrasound as a preferred diagnostic and/or therapeutic modality. This rapid adoption, however, risks outpacing medical training and education. It is within this gap that AI presents a unique opportunity to facilitate both image acquisition and image interpretation when technology outstrips human skill levels.
Because technology has evolved so rapidly within the last decade, there have been limited studies on the applications and development of AI for POCUS, specifically POCUS used in or developed for low-resource settings. Previous literature mainly focuses on POCUS education and training, aiming to nurture proficient POCUS practitioners or to enhance the acceptance and utilization of POCUS in such settings [15,16,17]. Advanced technologies, including telehealth applications that use POCUS for both diagnosis and remote education, have also been proposed but did not involve AI [18,19,20]. Some articles related to AI concerned conventional ultrasound rather than POCUS, or addressed broad and general situations rather than low-resource settings specifically [21,22,23,24,25]. This review aims to accomplish two research objectives: (1) to examine the current state of POCUS AI applications in and for low-resource settings using various levels of analysis, including target population, geography or country, type of low-resource setting, and the objective and implication of each study; and (2) to identify the limitations and barriers that these AI systems face so that future studies can address them.

2. Materials and Methods

This paper utilized the Cochrane guidelines for conduct and the Preferred Reporting Items for Systematic Reviews and Meta-Analyses extension for Scoping Reviews (PRISMA-ScR) guidelines to minimize bias and provide the review with more structure. Institutional Review Board (IRB) approval was not necessary because the study did not involve human participants.
A comprehensive search was conducted on three electronic databases (SCOPUS, PubMed, and Google Scholar) in June 2024 using the following keywords: ((POCUS) OR (Point-of-care ultrasound) OR (Portable Ultrasound)) AND ((AI) OR (Artificial Intelligence) OR (Machine Learning) OR (Deep Learning) OR (NLP) OR (Natural Language Processing) OR (Large Language Model) OR (LLM) OR (Generative AI)) AND ((low-resource) OR (resource-limited) OR (rural) OR (remote) OR (austere setting) OR (LMIC) OR (Low-middle income countries) OR (military) OR (space) OR ((emergency) AND (low-resource))). A data-charting form was jointly developed by all authors to determine which variables to extract. The actual extraction of metadata was conducted by two authors (SK and SY). This metadata included authors, population, geography or country, type of low-resource setting, type of AI, and research objectives. We did not impose a time restriction, to ensure the search was systematic [26,27]. The records retrieved from these databases were exported to Covidence (Melbourne, Australia), a platform that aids scholars with literature reviews [28]. After duplicates were eliminated, the records went through two stages of screening.
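The boolean query above can be assembled programmatically from its three keyword groups. The sketch below is a simplification for illustration (the nested ((emergency) AND (low-resource)) clause is folded into the setting terms, and the helper function is illustrative rather than part of any database API):

```python
# Sketch: assembling the boolean search string from the three keyword
# groups in the Methods (device terms AND AI terms AND setting terms).
# Simplified for illustration; not part of any database's query API.

device_terms = ["POCUS", "Point-of-care ultrasound", "Portable Ultrasound"]
ai_terms = ["AI", "Artificial Intelligence", "Machine Learning", "Deep Learning",
            "NLP", "Natural Language Processing", "Large Language Model",
            "LLM", "Generative AI"]
setting_terms = ["low-resource", "resource-limited", "rural", "remote",
                 "austere setting", "LMIC", "Low-middle income countries",
                 "military", "space"]

def or_group(terms):
    """Join terms into a single parenthesized OR clause."""
    return "(" + " OR ".join(f"({t})" for t in terms) + ")"

# The full query ANDs the three OR groups together.
query = " AND ".join(or_group(g) for g in [device_terms, ai_terms, setting_terms])
print(query[:80])
```

Keeping the groups as data rather than a hand-typed string makes it easier to rerun the identical search across databases with different syntax requirements.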
During the first stage, the title, abstract, and type of study were examined, and a total of 918 records were excluded; a more specific breakdown is available in Figure 1. This stage was intended to filter out articles meeting the exclusion criteria that were deemed ineligible based on the title and abstract. More specifically, articles covering non-ultrasound applications, topics irrelevant to low-resource settings and AI, manuscripts that were not peer-reviewed, non-journal pieces (e.g., books), non-English articles, reviews, and any documents generated by non-humans (e.g., ChatGPT) were excluded.
During the second stage, records underwent a full-text review. The exclusion criteria from the first stage were applied again, this time on a full-text basis. In addition to studies that used or tested AI applications in low-resource settings, manuscripts that explicitly alluded to the potential benefits and usefulness of their proposed AI applications in low-resource settings were also included in our scoping review. Both stages of screening were conducted according to the inclusion and exclusion criteria in Table 1. All authors were involved in both stages of the screening process, and disagreements were resolved jointly by all six authors. The relevance and quality of each extracted article were determined through thorough discussion among all authors. The protocol used in this review was not preregistered.

3. Results

3.1. Overview

A total of 1196 records were retrieved, 37 duplicates were removed, and 918 records were removed after the initial screening. The remaining 241 records underwent full-text reviews according to the inclusion and exclusion criteria detailed in Table 1, resulting in 29 unique studies. Figure 1 displays the PRISMA flow diagram, which visualizes this screening process. Table 2 showcases the metadata of the 29 studies included in this review. The majority of studies (79%) were conducted from 2021 to 2023. The most frequently addressed medical departments were pulmonology (31%), obstetrics (21%), emergency medicine or intensive care units (ICU) (14%), and cardiology (14%). Deep learning was the most commonly used AI technique, employed in 23 studies (79%) to enhance the operation of POCUS in resource-limited settings. Other AI techniques utilized were machine learning, computer vision, and Bayesian machine learning.
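As a sanity check on the reported proportions, the snippet below reproduces the rounded percentages from underlying counts out of the 29 included studies. The counts are inferred from the stated percentages and are assumptions for illustration, not figures taken from Table 2:

```python
# Sketch: verifying that the percentages reported in the Results follow
# from plausible counts out of 29 included studies. Counts are inferred
# from the reported percentages (assumptions, not data from Table 2).

TOTAL = 29

counts = {
    "deep learning": 23,   # reported as 79%
    "pulmonology": 9,      # reported as 31%
    "obstetrics": 6,       # reported as 21%
    "emergency/ICU": 4,    # reported as 14%
    "cardiology": 4,       # reported as 14%
}

# Round each share to the nearest whole percent, as the text does.
percentages = {k: round(100 * v / TOTAL) for k, v in counts.items()}
print(percentages)
```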

3.2. Types of Populations and Locations

Almost half of the studies (45%) did not specify target populations; the population column in Table 2 is labeled “N/A” for these studies. Instead of focusing on particular populations, these studies proposed and assessed high-level AI algorithms or architectures that can automatically measure medical entities (e.g., bladder volume), assist in diagnosing or classifying conditions, and improve the quality assurance of POCUS-related operations in low-resource environments. Examples of such measurements included left ventricular ejection fraction and bladder volume [31,46]. Conditions for automatic POCUS image-based diagnosis ranged from pneumothorax to COVID-19 and breast cancer [33,37,44,49]. Other populations covered in the remaining studies included infants or neonates (14%), pregnant women (17%), and COVID-19 patients (17%). Regarding location, most studies (45%) were conducted in the United States, followed by Canada (21%). Other countries included Vietnam, India, Zambia, South Korea, Egypt, Norway, and Ethiopia.

3.3. Types of Low-Resource Settings

Four broad categories were identified regarding the types of low-resource settings: LMIC, rural or remote, emergency, and lack of key resources. Seven studies (24%) pertained to rural and remote settings. Two studies (7%) focused on emergency situations. Ten studies (34%) targeted LMICs.
A total of 18 studies (62%) aimed to address limitations due to the scarcity of key resources. Key resources included experienced personnel, computing resources, and data for training AI models. Cho et al. developed a deep learning-based system to measure bladder volume from POCUS images. This system was designed to operate on devices with limited computing power, which is typical in LMICs and rural areas [31]. This may aid clinicians in assessing bladder volume even in low-resource settings with limited access to complex equipment.
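One common way to fit a deep network onto constrained hardware of the kind Cho et al. target is to replace standard convolutions with depthwise-separable ones; whether their system uses this specific technique is not stated in the text, so this is an illustrative assumption. The arithmetic below shows the per-layer parameter savings:

```python
# Sketch: parameter counts for a standard vs. a depthwise-separable
# convolution layer. Illustrative only; not Cho et al.'s actual design.

def standard_conv_params(k, c_in, c_out):
    # One k x k kernel spanning all input channels per output channel.
    return k * k * c_in * c_out

def depthwise_separable_params(k, c_in, c_out):
    # One k x k filter per input channel, then a 1x1 pointwise projection.
    return k * k * c_in + c_in * c_out

# A typical mid-network layer: 3x3 kernel, 64 -> 128 channels.
k, c_in, c_out = 3, 64, 128
std = standard_conv_params(k, c_in, c_out)        # 73,728 parameters
sep = depthwise_separable_params(k, c_in, c_out)  # 8,768 parameters
print(f"reduction: {std / sep:.1f}x")
```

Reductions of this order are what make on-device inference plausible on the low-power systems-on-chip common in LMIC and rural deployments.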
Baloescu et al. addressed the shortage of staff with the sonography experience needed to assess B-lines in point-of-care lung ultrasound, which is crucial for diagnosing shortness of breath in the emergency department (ED) [44]. The study developed and evaluated a deep convolutional neural network that quantified the assessment of B-lines in lung ultrasound, using 400 ultrasound clips from an existing database of ED patients. The model achieved 93% sensitivity and 96% specificity in identifying B-lines compared with expert evaluations, suggesting that the system could empower inexperienced personnel in low-resource hospitals to perform B-line identification and quantification, tasks that may be challenging for novice users.
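The reported metrics follow directly from a confusion matrix. The counts below are invented for illustration (not data from Baloescu et al.), chosen so the toy figures match the reported 93% and 96%:

```python
# Sketch: sensitivity and specificity from confusion-matrix counts.
# The counts are hypothetical, chosen only to mirror the reported values.

def sensitivity(tp, fn):
    """True positive rate: share of clips with B-lines that are flagged."""
    return tp / (tp + fn)

def specificity(tn, fp):
    """True negative rate: share of B-line-free clips correctly cleared."""
    return tn / (tn + fp)

# Hypothetical counts over 400 clips: 200 with B-lines, 200 without.
tp, fn, tn, fp = 186, 14, 192, 8

print(f"sensitivity: {sensitivity(tp, fn):.0%}")  # 93%
print(f"specificity: {specificity(tn, fp):.0%}")  # 96%
```

High values on both metrics matter here for opposite reasons: sensitivity guards against missed pathology by novice operators, while specificity limits false alarms that would erode trust in the tool.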
To address the lack of data for training AI systems for POCUS-related tasks, Blaivas et al. presented a new method of using unrelated ultrasound window data (only apical 4-chamber views) to train a POCUS machine learning algorithm to measure the left ventricular ejection fraction. This approach is expected to guide the development of future POCUS and deep learning algorithms to mitigate the data paucity common in LMICs.

4. Discussion

This review aims to understand the current landscape of AI applications for POCUS in low-resource settings. It seeks to identify gaps in these AI applications in order to inform future research and, ultimately, benefit both the clinicians and the patients in resource-constrained environments.
A major gap identified in the studies included in this review was the potential inability of AI systems to generalize to other health conditions, populations, or settings. With ongoing training and adjustments, the generalizability of ultrasound AI models is expected to improve. Many of the articles reviewed were based on pilot studies; consequently, the experiments, conducted under restricted conditions, may not fully account for all variables in real-world scenarios. Nhat et al. presented an AI-enabled point-of-care lung ultrasound (LUS) solution that assists non-expert clinicians in LMIC intensive care units (ICUs) with LUS interpretation [29]. The AI system, however, was trained only on data from patients with severe dengue or sepsis. Future studies are therefore needed to investigate whether this AI solution is equally helpful in interpreting point-of-care LUS images for other diseases. Libon et al. assessed the feasibility of implementing a US FDA-cleared AI screening device for developmental dysplasia of the hip (DDH) in infants aged 6 to 10 weeks [30]. This pilot study was limited in scale, involving 306 infants from a suburban Western Canadian area with a substantial Indigenous population. A future study enrolling a larger number of infants with greater racial and geographical diversity would strengthen these findings.
Furthermore, the performance of some algorithms proposed in the studies may diminish with more complex datasets. For example, Aujla et al. developed a machine learning framework to automatically diagnose neonatal lung pathologies in low-resource and, particularly, remote settings [34]. Linear discriminant analysis (LDA) was used as the main classifier, but for larger datasets this linear classifier may not be the most appropriate; deep learning-based classifiers that can capture more convoluted patterns may prove beneficial. Nevertheless, this simple linear classifier was selected over more complex alternatives to extract and interpret meaningful features relevant to clinical markers and to keep the outcomes conservative and realistic. The trade-off between the interpretability and complexity of AI systems should be a key consideration for future research on this topic.
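To illustrate why LDA keeps results interpretable, the sketch below fits a minimal two-class LDA on invented toy features (the data, feature dimensions, and class labels are assumptions for illustration, not the authors' pipeline). The decision rule reduces to a single linear projection whose weights can be inspected against clinical markers:

```python
import numpy as np

# Sketch: minimal two-class LDA. The classifier is one weight vector w
# and a threshold c: project features onto w, compare to c. Toy data only.

def fit_lda(X0, X1):
    """Return (w, c) for the rule: w @ x > c  ->  class 1."""
    mu0, mu1 = X0.mean(axis=0), X1.mean(axis=0)
    # Pooled within-class covariance (scatter matrices averaged).
    cov = (np.cov(X0, rowvar=False) * (len(X0) - 1)
           + np.cov(X1, rowvar=False) * (len(X1) - 1)) / (len(X0) + len(X1) - 2)
    w = np.linalg.solve(cov, mu1 - mu0)   # discriminant direction
    c = w @ (mu0 + mu1) / 2               # midpoint threshold
    return w, c

# Invented 2-feature toy data standing in for extracted clinical markers.
rng = np.random.default_rng(0)
healthy = rng.normal([0.0, 0.0], 0.5, size=(50, 2))
pathology = rng.normal([2.0, 1.5], 0.5, size=(50, 2))

w, c = fit_lda(healthy, pathology)
preds = np.concatenate([healthy, pathology]) @ w > c
accuracy = (preds == np.repeat([False, True], 50)).mean()
print(f"toy accuracy: {accuracy:.2f}")
```

Because the model is just `w` and `c`, a clinician can read off which feature drives each decision, which is exactly the interpretability that motivated choosing LDA over a deep classifier.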
Regional disparities in research activity on AI applications for POCUS in low-resource settings are concerning. Only 30% of the studies included in this review were conducted in LMICs. Even when an AI application is designed for low-resource settings, testing and assessing it in such settings is crucial for ensuring its usefulness there. The concentration of studies in the U.S. and Canada suggests a need for increased research investment and collaboration in LMICs and other underserved regions to ensure that the benefits of AI applications for POCUS are globally accessible. Pokaprakarn et al. and Viswanathan et al. may serve as exemplary models for addressing these regional disparities [52,54]: researchers from both studies were based in the U.S. but tested and evaluated their AI systems not only in the U.S. but also in Zambia.
Patient compliance and research ethics are notably critical issues in studies conducted in remote settings. These challenges may arise because researchers and patients are not co-located, which complicates supervision, interaction, and rapport building. Sultan et al. performed a pilot analysis to evaluate the performance of AI-powered COVID-19 detection based on point-of-care lung ultrasound images [32]. This study primarily focused on inexperienced users, who comprise most of the workforce in low-resource settings. The authors anticipate that patient compliance within the remotely monitored subgroup will be a significant limitation; expected barriers include reluctance to self-administer daily POCUS due to discomfort, fear of inadequate care, and misunderstanding of study protocols. Ensuring the security of ultrasound imaging data and other health records to protect patient privacy and confidentiality must be prioritized in future, larger-scale studies.
Future research must tackle the challenge of standardizing POCUS devices, protocols, and algorithms. Four popular handheld POCUS devices are currently available on the market: Butterfly iQ+ by Butterfly Network Inc. (Burlington, MA, USA), Kosmos by EchoNous (Redmond, WA, USA), Vscan Air by General Electric (Boston, MA, USA), and Lumify by Philips Healthcare (Andover, MA, USA). These devices offer different functionalities and views, and no single handheld ultrasound device is perceived to have all the desired characteristics [58]. In one study evaluating deep learning algorithms on 21 videos obtained from each of two novel POCUS machines, performance was significantly worse than on a common POCUS machine in widespread use [59]. Lack of algorithm standardization also degrades model performance. Blaivas et al. developed a “do-it-yourself” (DIY) deep learning algorithm for classifying POCUS images (pelvis, heart, lung, abdomen, musculoskeletal, ocular, and central vascular access) to enhance the quality assurance workflow for POCUS programs [43]. This algorithm, which processed ultrasound images from various POCUS programs, exhibited high performance variability across different systems, implying that it would require further training on new image samples when used in a different POCUS program. For instance, the algorithm had difficulty classifying musculoskeletal ultrasound images while performing well in other domains. Standardizing devices, protocols, and algorithms is crucial in resource-limited settings with limited options; a standardized all-in-one solution may be a better alternative.
The diversity of POCUS AI applications across different domains, including lung, hip, and bladder, illustrates the challenge of tailoring solutions to meet the specific needs of each application. For instance, the ability of AI to enhance diagnostic precision through the quantitative measurement of DDH in infants showcases the direct and reproducible benefits of AI for well-defined clinical measures in hip dysplasia screening, as demonstrated by Libon et al. [30]. Similarly, bladder volume estimation using AI in low-resource settings exemplifies the potential for AI to provide significant operational efficiencies in routine diagnostics [31]. Conversely, lung ultrasound applications, such as the intensive-care LUS application explored by Nhat et al., present greater challenges due to the qualitative nature of the assessments and the subtlety of visual cues, which affect the reproducibility and consistency of AI predictions [29]. These examples underscore the necessity for AI systems that are specifically adapted to the complexities of each medical imaging domain, ensuring that AI tools augment clinical workflows effectively without leading to misinterpretation or overreliance. By analyzing impact separately by application area, researchers will better understand the distinct benefits and limitations of AI, aligning research and development efforts with the unique characteristics of each clinical condition.
This review is not without limitations. As mentioned in the Methods section, the protocol was not preregistered; preregistration would be desirable in similar future studies to ensure rigor and consistency. Furthermore, readers may have difficulty applying the insights drawn here because of the broad scope of applications covered. Future research may warrant focusing on applications for specific departments (e.g., cardiology) so that the role of AI systems for POCUS can be robustly validated, at least for that particular department or domain of application.

5. Conclusions

This review examined the current state of AI in POCUS, employing filters such as medical departments, countries, research geographies, AI types, and low-resource settings. The limitations of various POCUS AI applications, implemented and evaluated in low-resource settings, were extensively analyzed. Identified limitations include limited generalizability, insufficient datasets for training AI systems, regional disparities in research on AI applications for POCUS, potential patient noncompliance, ethical challenges in remote settings, and a lack of standardized POCUS protocols, algorithms, and devices. Despite these challenges, the findings demonstrate that POCUS AI systems are both feasible and effective in aiding patients and clinicians to overcome barriers such as scarce computing resources and a lack of trained personnel in low-resource settings. Future research should focus on developing new POCUS AI applications that both address the gaps identified in this review and prove cost-effective, using fewer computational resources without sacrificing performance. Lastly, if new POCUS AI applications could become more user-friendly, this would effectively empower the most inexperienced users in low-resource settings to perform point-of-care ultrasound with high fidelity.

Author Contributions

All authors were equally involved in every step of the study, from conceptualization and methodology to discussion and writing. All authors have read and agreed to the published version of the manuscript.

Funding

This work was funded by the National Institute of Allergy and Infectious Diseases (NIAID), the National Institute on Drug Abuse (NIDA), the National Center for Complementary and Integrative Health (NCCIH), and the National Institute on Minority Health and Health Disparities (NIMHD). The funders played no role in the design of this work or the views expressed.

Institutional Review Board Statement

Ethical review and approval were not required for this study as the study did not involve any human subjects.

Conflicts of Interest

S.D.Y. is an advisor to digital health startups and a board member for the Health and Medicine Division of the National Academy of Sciences, Engineering, and Medicine. C.F. is an advisor to several digital health startups, consultant for Philips Ultrasound, and former employee of Centaur Labs. E.H. is a Butterfly Ultrasound Ambassador and Butterfly Ultrasound Annotator. E.H. is also working as an advisor for Level Ex.

References

  1. Kim, D.-M.; Park, S.-K.; Park, S.-G. A Study on the Performance Evaluation Criteria and Methods of Abdominal Ultrasound Devices Based on International Standards. Safety 2021, 7, 31. [Google Scholar] [CrossRef]
  2. Diagnostic Ultrasound Market Size to Hit USD 11 Bn by 2033. Available online: https://www.precedenceresearch.com/diagnostic-ultrasound-market (accessed on 27 March 2024).
  3. Stewart, K.A.; Navarro, S.M.; Kambala, S.; Tan, G.; Poondla, R.; Lederman, S.; Barbour, K.; Lavy, C. Trends in Ultrasound Use in Low and Middle Income Countries: A Systematic Review. Int. J. Matern. Child Health AIDS (IJMA) 2019, 9, 103–120. [Google Scholar] [CrossRef] [PubMed]
  4. Yoshida, T.; Noma, H.; Nomura, T.; Suzuki, A.; Mihara, T. Diagnostic accuracy of point-of-care ultrasound for shock: A systematic review and meta-analysis. Crit. Care 2023, 27, 200. [Google Scholar] [CrossRef] [PubMed]
  5. Lee, L.; DeCara, J.M. Point-of-Care Ultrasound. Curr. Cardiol. Rep. 2020, 22, 149. [Google Scholar] [CrossRef]
  6. Jhagru, R.; Singh, R.; Rupp, J. Evaluation of an emergency medicine point-of-care ultrasound curriculum adapted for a resource-limited setting in Guyana. Int. J. Emerg. Med. 2023, 16, 57. [Google Scholar] [CrossRef]
  7. Ganchi, F.A.; Hardcastle, T.C. Role of Point-of-Care Diagnostics in Lower- and Middle-Income Countries and Austere Environments. Diagnostics 2023, 13, 1941. [Google Scholar] [CrossRef] [PubMed]
  8. Dana, E.; Nour, A.M.; Kpa’hanba, G.A.; Khan, J.S. Point-of-Care Ultrasound (PoCUS) and Its Potential to Advance Patient Care in Low-Resource Settings and Conflict Zones. Disaster Med. Public Health Prep. 2023, 17, e417. [Google Scholar] [CrossRef] [PubMed]
  9. Fritz, F.; Tilahun, B.; Dugas, M. Success criteria for electronic medical record implementations in low-resource settings: A systematic review. J. Am. Med. Inform. Assoc. 2015, 22, 479–488. [Google Scholar] [CrossRef] [PubMed]
  10. Venkatayogi, N.; Gupta, M.; Gupta, A.; Nallaparaju, S.; Cheemalamarri, N.; Gilari, K.; Pathak, S.; Vishwanath, K.; Soney, C.; Bhattacharya, T.; et al. From Seeing to Knowing with Artificial Intelligence: A Scoping Review of Point-of-Care Ultrasound in Low-Resource Settings. Appl. Sci. 2023, 13, 8427. [Google Scholar] [CrossRef]
  11. Wanjiku, G.W.; Bell, G.; Wachira, B. Assessing a novel point-of-care ultrasound training program for rural healthcare providers in Kenya. BMC Health Serv. Res. 2018, 18, 607. [Google Scholar] [CrossRef]
  12. Reynolds, T.A.; Amato, S.; Kulola, I.; Chen, C.-J.J.; Mfinanga, J.; Sawe, H.R. Impact of point-of-care ultrasound on clinical decision-making at an urban emergency department in Tanzania. PLoS ONE 2018, 13, e0194774. [Google Scholar] [CrossRef]
  13. Burleson, S.L.; Swanson, J.F.; Shufflebarger, E.F.; Wallace, D.W.; Heimann, M.A.; Crosby, J.C.; Pigott, D.C.; Gullett, J.P.; Thompson, M.A.; Greene, C.J. Evaluation of a novel handheld point-of-care ultrasound device in an African emergency department. Ultrasound J. 2020, 12, 53. [Google Scholar] [CrossRef]
  14. Valderrama, C.E.; Marzbanrad, F.; Stroux, L.; Martinez, B.; Hall-Clifford, R.; Liu, C.; Katebi, N.; Rohloff, P.; Clifford, G.D. Improving the Quality of Point of Care Diagnostics with Real-Time Machine Learning in Low Literacy LMIC Settings. In Proceedings of the 1st ACM SIGCAS Conference on Computing and Sustainable Societies, COMPASS ’18, San Jose, CA, USA, 20–22 June 2018; Association for Computing Machinery: New York, NY, USA, 2018; Volume 20, pp. 1–11. [Google Scholar] [CrossRef]
  15. Dreyfuss, A.; Martin, D.A.; Farro, A.; Inga, R.; Enriquez, S.; Mantuani, D.; Nagdev, A. A Novel Multimodal Approach to Point-of-Care Ultrasound Education in Low-Resource Settings. West. J. Emerg. Med. 2020, 21, 1017–1021. [Google Scholar] [CrossRef]
  16. Vinayak, S.; Temmerman, M.; Villeirs, G.; Brownie, S.M. A Curriculum Model for Multidisciplinary Training of Midwife Sonographers in a Low Resource Setting. J. Multidiscip. Healthc. 2021, 14, 2833–2844. [Google Scholar] [CrossRef] [PubMed]
  17. Maw, A.M.; Galvin, B.; Henri, R.; Yao, M.; Exame, B.; Fleshner, M.; Fort, M.P.; Morris, M.A. Stakeholder Perceptions of Point-of-Care Ultrasound Implementation in Resource-Limited Settings. Diagnostics 2019, 9, 153. [Google Scholar] [CrossRef]
  18. Dougherty, A.; Kasten, M.; DeSarno, M.; Badger, G.; Streeter, M.; Jones, D.C.; Sussman, B.; DeStigter, K. Validation of a Telemedicine Quality Assurance Method for Point-of-Care Obstetric Ultrasound Used in Low-Resource Settings. J. Ultrasound Med. 2021, 40, 529–540. [Google Scholar] [CrossRef]
  19. Chen, J.; Dobron, A.; Esterson, A.; Fuchs, L.; Glassberg, E.; Hoppenstein, D.; Kalandarev-Wilson, R.; Netzer, I.; Nissan, M.; Ovsiovich, R.S.; et al. A randomized, controlled, blinded evaluation of augmenting point-of-care ultrasound and remote telementored ultrasound in inexperienced operators. Isr. Med. Assoc. J. 2022, 24, 596–601. [Google Scholar] [PubMed]
  20. Dreizler, L.; Wanjiku, G.W. Tele-ECHO for Point-of-Care Ultrasound in Rural Kenya: A Feasibility Study. Rhode Island Med. J. 2019, 102, 28–31. [Google Scholar]
  21. Kaur, P.; Mack, A.A.; Patel, N.; Pal, A.; Singh, R.; Michaud, A.; Mulflur, M. Unlocking the Potential of Artificial Intelligence (AI) for Healthcare. In Artificial Intelligence in Medicine and Surgery—An Exploration of Current Trends, Potential Opportunities, and Evolving Threats; IntechOpen: London, UK, 2023; Volume 1. [Google Scholar] [CrossRef]
  22. Drukker, L.; Noble, J.A.; Papageorghiou, A.T. Introduction to artificial intelligence in ultrasound imaging in obstetrics and gynecology. Ultrasound Obstet. Gynecol. 2020, 56, 498–505. [Google Scholar] [CrossRef]
  23. Brattain, L.J.; Pierce, T.T.; Gjesteby, L.A.; Johnson, M.R.; DeLosa, N.D.; Werblin, J.S.; Gupta, J.F.; Ozturk, A.; Wang, X.; Li, Q.; et al. AI-Enabled, Ultrasound-Guided Handheld Robotic Device for Femoral Vascular Access. Biosensors 2021, 11, 522. [Google Scholar] [CrossRef]
  24. Wu, G.-G.; Zhou, L.-Q.; Xu, J.-W.; Wang, J.-Y.; Wei, Q.; Deng, Y.-B.; Cui, X.-W.; Dietrich, C.F. Artificial intelligence in breast ultrasound. World J. Radiol. 2019, 11, 19–26. [Google Scholar] [CrossRef] [PubMed]
  25. Hareendranathan, A.R.; Chahal, B.; Ghasseminia, S.; Zonoobi, D.; Jaremko, J.L. Impact of scan quality on AI assessment of hip dysplasia ultrasound. J. Ultrasound 2021, 25, 145–153. [Google Scholar] [CrossRef] [PubMed]
  26. Shaddock, L.; Smith, T. Potential for use of portable ultrasound devices in rural and remote settings in australia and other developed countries: A systematic review. J. Multidiscip. Health 2022, 15, 605–625. [Google Scholar] [CrossRef] [PubMed]
  27. Becker, D.M.; Tafoya, C.A.; Becker, S.L.; Kruger, G.H.; Tafoya, M.J.; Becker, T.K. The use of portable ultrasound devices in low- and middle-income countries: A systematic review of the literature. Trop. Med. Int. Health 2015, 21, 294–311. [Google Scholar] [CrossRef] [PubMed]
  28. Covidence—Better Systematic Review Management. Covidence. Available online: https://www.covidence.org/ (accessed on 27 March 2024).
  29. Nhat, P.T.H.; Van Hao, N.; Tho, P.V.; Kerdegari, H.; Pisani, L.; Thu, L.N.M.; Phuong, L.T.; Duong, H.T.H.; Thuy, D.B.; McBride, A.; et al. Clinical benefit of AI-assisted lung ultrasound in a resource-limited intensive care unit. Crit. Care 2023, 27, 257. [Google Scholar] [CrossRef] [PubMed]
  30. Libon, J.; Ng, C.; Bailey, A.; Hareendranathan, A.; Joseph, R.; Dulai, S. Remote diagnostic imaging using artificial intelligence for diagnosing hip dysplasia in infants: Results from a mixed-methods feasibility pilot study. Paediatr. Child Health 2023, 28, 285–290. [Google Scholar] [CrossRef] [PubMed]
  31. Cho, H.; Song, I.; Jang, J.; Yoo, Y. A Lightweight Deep Learning Network on a System-on-Chip for Wearable Ultrasound Bladder Volume Measurement Systems: Preliminary Study. Bioengineering 2023, 10, 525. [Google Scholar] [CrossRef] [PubMed]
  32. Sultan, L.R.; Haertter, A.; Al-Hasani, M.; Demiris, G.; Cary, T.W.; Tung-Chen, Y.; Sehgal, C.M. Can Artificial Intelligence Aid Diagnosis by Teleguided Point-of-Care Ultrasound? A Pilot Study for Evaluating a Novel Computer Algorithm for COVID-19 Diagnosis Using Lung Ultrasound. AI 2023, 4, 875–887. [Google Scholar] [CrossRef] [PubMed]
  33. Perera, S.; Adhikari, S.; Yilmaz, A. Pocformer: A Lightweight Transformer Architecture for Detection of COVID-19 Using Point of Care Ultrasound. In Proceedings of the 2021 IEEE International Conference on Image Processing (ICIP), Anchorage, AK, USA, 19–22 September 2021; pp. 195–199. [Google Scholar]
  34. Aujla, S.; Mohamed, A.; Tan, R.; Magtibay, K.; Tan, R.; Gao, L.; Khan, N.; Umapathy, K. Classification of lung pathologies in neonates using dual-tree complex wavelet transform. Biomed. Eng. Online 2023, 22, 115. [Google Scholar] [CrossRef]
  35. Abdel-Basset, M.; Hawash, H.; Alnowibet, K.A.; Mohamed, A.W.; Sallam, K.M. Interpretable Deep Learning for Discriminating Pneumonia from Lung Ultrasounds. Mathematics 2022, 10, 4153. [Google Scholar] [CrossRef]
  36. Jana, B.; Biswas, R.; Nath, P.K.; Saha, G.; Banerjee, S. Smartphone Based Point-of-Care System Using Continuous Wave Portable Doppler. IEEE Trans. Instrum. Meas. 2020, 69, 8352–8361. [Google Scholar] [CrossRef]
  37. Hannan, D.; Nesbit, S.C.; Wen, X.; Smith, G.; Zhang, Q.; Goffi, A.; Chan, V.; Morris, M.J.; Hunninghake, J.C.; Villalobos, N.E.; et al. MobilePTX: Sparse Coding for Pneumothorax Detection Given Limited Training Examples. Proc. AAAI Conf. Artif. Intell. 2023, 37, 15675–15681. [Google Scholar] [CrossRef]
  38. Ekambaram, K.; Hassan, K. Establishing a Novel Diagnostic Framework Using Handheld Point-of-Care Focused-Echocardiography (HoPE) for Acute Left-Sided Cardiac Valve Emergencies: A Bayesian Approach for Emergency Physicians in Resource-Limited Settings. Diagnostics 2023, 13, 2581. [Google Scholar] [CrossRef] [PubMed]
  39. Khan, N.H.; Tegnander, E.; Dreier, J.M.; Eik-Nes, S.; Torp, H.; Kiss, G. Automatic measurement of the fetal abdominal section on a portable ultrasound machine for use in low and middle income countries. In Proceedings of the 2016 IEEE International Ultrasonics Symposium (IUS), Tours, France, 18–21 September 2016; pp. 1–4. [Google Scholar] [CrossRef]
  40. van den Heuvel, T.L.A.; Petros, H.; Santini, S.; de Korte, C.L.; van Ginneken, B. Automated Fetal Head Detection and Circumference Estimation from Free-Hand Ultrasound Sweeps Using Deep Learning in Resource-Limited Countries. Ultrasound Med. Biol. 2019, 45, 773–785. [Google Scholar] [CrossRef] [PubMed]
  41. Jafari, M.; Girgis, H.; Van Woudenberg, N.; Liao, Z.; Rohling, R.; Gin, K.; Abolmaesumi, P.; Tsang, T. Automatic Biplane Left Ventricular Ejection Fraction Estimation with Mobile Point-of-Care Ultrasound Using Multi-Task Learning and Adversarial Training. Int. J. Comput. Assist. Radiol. Surg. 2019, 14, 1027–1037. [Google Scholar] [CrossRef]
  42. Al-Zogbi, L.; Singh, V.; Teixeira, B.; Ahuja, A.; Bagherzadeh, P.S.; Kapoor, A.; Saeidi, H.; Fleiter, T.; Krieger, A. Autonomous Robotic Point-of-Care Ultrasound Imaging for Monitoring of COVID-19-Induced Pulmonary Diseases. Front. Robot. AI 2021, 8, 645756. [Google Scholar] [CrossRef]
  43. Blaivas, M.; Arntfield, R.; White, M. DIY AI, deep learning network development for automated image classification in a point-of-care ultrasound quality assurance program. J. Am. Coll. Emerg. Physicians Open 2020, 1, 124–131. [Google Scholar] [CrossRef]
  44. Baloescu, C.; Toporek, G.; Kim, S.; McNamara, K.; Liu, R.; Shaw, M.M.; McNamara, R.L.; Raju, B.I.; Moore, C.L. Automated Lung Ultrasound B-line Assessment Using a Deep Learning Algorithm. IEEE Trans. Ultrason. Ferroelectr. Freq. Control 2020, 67, 2312–2320. [Google Scholar] [CrossRef] [PubMed]
  45. Cheema, B.S.; Walter, J.; Narang, A.; Thomas, J.D. Artificial Intelligence–Enabled POCUS in the COVID-19 ICU. JACC Case Rep. 2021, 3, 258–263. [Google Scholar] [CrossRef]
  46. Blaivas, M.; Blaivas, L.N.; Campbell, K.; Thomas, J.; Shah, S.; Yadav, K.; Liu, Y.T. Making Artificial Intelligence Lemonade Out of Data Lemons. J. Ultrasound Med. 2021, 41, 2059–2069. [Google Scholar] [CrossRef]
  47. Cho, H.; Kim, D.; Chang, S.; Kang, J.; Yoo, Y. A system-on-chip solution for deep learning-based automatic fetal biometric measurement. Expert Syst. Appl. 2024, 237, 121482. [Google Scholar] [CrossRef]
  48. Zemi, N.Z.; Bunnell, A.; Valdez, D.; Shepherd, J.A. Assessing the feasibility of AI-enhanced portable ultrasound for improved early detection of breast cancer in remote areas. In Proceedings of the 17th International Workshop on Breast Imaging (IWBI), Chicago, IL, USA, 9–12 June 2024; Volume 13174, pp. 88–94. [Google Scholar] [CrossRef]
  49. Karlsson, J.; Arvidsson, I.; Sahlin, F.; Åström, K.; Overgaard, N.C.; Lång, K.; Heyden, A. Classification of point-of-care ultrasound in breast imaging using deep learning. In Proceedings of the Medical Imaging 2023: Computer-Aided Diagnosis, San Diego, CA, USA, 19–23 February 2023; Volume 12465, pp. 191–199. [Google Scholar] [CrossRef]
  50. MacLean, A.; Abbasi, S.; Ebadi, A.; Zhao, A.; Pavlova, M.; Gunraj, H.; Xi, P.; Kohli, S.; Wong, A. COVID-Net US: A Tailored, Highly Efficient, Self-Attention Deep Convolutional Neural Network Design for Detection of COVID-19 Patient Cases from Point-of-Care Ultrasound Imaging. In Domain Adaptation and Representation Transfer, and Affordable Healthcare and AI for Resource Diverse Global Health; Springer: Cham, Switzerland, 2021; pp. 191–202. [Google Scholar] [CrossRef]
  51. Adedigba, A.P.; Adeshina, S.A. Deep Learning-based Classification of COVID-19 Lung Ultrasound for Tele-operative Robot-assisted diagnosis. In Proceedings of the 2021 1st International Conference on Multidisciplinary Engineering and Applied Science (ICMEAS), Abuja, Nigeria, 15–16 July 2021; pp. 1–6. [Google Scholar] [CrossRef]
  52. Pokaprakarn, T.; Prieto, J.C.; Price, J.T.; Kasaro, M.P.; Sindano, N.; Shah, H.R.; Peterson, M.; Akapelwa, M.M.; Kapilya, F.M.; Sebastião, Y.V.; et al. AI Estimation of Gestational Age from Blind Ultrasound Sweeps in Low-Resource Settings. NEJM Evid. 2022, 1, EVIDoa2100058. [Google Scholar] [CrossRef]
  53. Karnes, M.; Perera, S.; Adhikari, S.; Yilmaz, A. Adaptive Few-Shot Learning PoC Ultrasound COVID-19 Diagnostic System. In Proceedings of the IEEE Biomedical Circuits and Systems Conference (BioCAS), Berlin, Germany, 7–9 October 2021; pp. 1–6. [Google Scholar] [CrossRef]
  54. Viswanathan, A.V.; Pokaprakarn, T.; Kasaro, M.P.; Shah, H.R.; Prieto, J.C.; Benabdelkader, C.; Sebastião, Y.V.; Sindano, N.; Stringer, E.; Stringer, J.S.A. Deep learning to estimate gestational age from fly-to cineloop videos: A novel approach to ultrasound quality control. Int. J. Gynecol. Obstet. 2024, 165, 1013–1021. [Google Scholar] [CrossRef] [PubMed]
  55. Zeng, E.Z.; Ebadi, A.; Florea, A.; Wong, A. COVID-Net L2C-ULTRA: An Explainable Linear-Convex Ultrasound Augmentation Learning Framework to Improve COVID-19 Assessment and Monitoring. Sensors 2024, 24, 1664. [Google Scholar] [CrossRef]
  56. Abhyankar, G.; Raman, R. Decision Tree Analysis for Point-of-Care Ultrasound Imaging: Precision in Constrained Healthcare Settings. In Proceedings of the 2024 International Conference on Inventive Computation Technologies (ICICT), Lalitpur, Nepal, 24–26 April 2024; pp. 782–787. [Google Scholar] [CrossRef]
  57. Madhu, G.; Kautish, S.; Gupta, Y.; Nagachandrika, G.; Biju, S.M.; Kumar, M. XCovNet: An Optimized Xception Convolutional Neural Network for Classification of COVID-19 from Point-of-Care Lung Ultrasound Images. Multimed. Tools Appl. 2023, 83, 33653–33674. [Google Scholar] [CrossRef]
  58. Le, M.-P.T.; Voigt, L.; Nathanson, R.; Maw, A.M.; Johnson, G.; Dancel, R.; Mathews, B.; Moreira, A.; Sauthoff, H.; Gelabert, C.; et al. Comparison of four handheld point-of-care ultrasound devices by expert users. Ultrasound J. 2022, 14, 27. [Google Scholar] [CrossRef]
  59. Blaivas, M.; Blaivas, L.N.B.; Tsung, J.W. Deep Learning Pitfall: Impact of Novel Ultrasound Equipment Introduction on Algorithm Performance and the Realities of Domain Adaptation. J. Ultrasound Med. 2022, 41, 855–863. [Google Scholar] [CrossRef]
Figure 1. PRISMA flow diagram. * not peer-reviewed or non-journals (n = 143); reviews (n = 197); not POCUS-related or irrelevant to ultrasound (n = 214); not low-resource setting (n = 256); not AI-related (n = 108). ** not POCUS-related or irrelevant to ultrasound (n = 52); not low-resource setting (n = 143); not AI-related (n = 17).
Table 1. Inclusion and exclusion criteria for screening.

Inclusion criteria:
- Point-of-care ultrasound used
- Low-resource setting
- Artificial intelligence (AI) application
- Peer-reviewed
- Not a review

Exclusion criteria:
- Irrelevant to ultrasound or only related to traditional ultrasound
- Not low-resource setting
- Not in English
- Not human subjects
- Not artificial intelligence (AI)-related
- Not peer-reviewed
- Not journals (e.g., books)
- Reviews (e.g., systematic reviews)
Table 2. Metadata of studies included in the review.

Author | Population | Geography/Country | Low-Resource Setting Type/Department | AI Used | Objective
Nhat et al., 2023 [29] | Doctors, clinicians | Vietnam | LMIC/Intensive care unit (ICU) | Deep learning | Develop an AI solution that assists lung ultrasound (LUS) practitioners, especially with LUS interpretation, and assess its usefulness in a low-resource ICU.
Libon et al., 2023 [30] | Infants | Canada | Remote/Pediatrics | US FDA-cleared AI screening device for infant hip dysplasia (DDH) | Evaluate the feasibility of implementing an AI-enhanced portable ultrasound tool for infant hip dysplasia (DDH) screening in primary care by determining its effectiveness in practice and evaluating patient and provider feedback.
Cho et al., 2023 [31] | N/A | South Korea | Lack of computing resources/Urology | Deep learning | Develop a deep learning system, optimized for a low-resource system-on-chip (SoC), that measures bladder volume in ultrasound images quickly and accurately even on devices with limited computing power, easing bladder disorder diagnosis in point-of-care settings where access to complex equipment is limited.
Sultan et al., 2023 [32] | Clinicians, patients | United States | Remote/Pulmonology | Deep learning | Propose teleguided POCUS supported by AI for monitoring COVID-19 patients by inexperienced personnel, including self-monitoring by patients themselves in remote settings.
Perera et al., 2021 [33] | N/A | United States | Rural and LMIC/Pulmonology | Deep learning | Present an image-based solution that automatically tests for COVID-19, enabling rapid mass testing with or without a trained medical professional in rural environments and developing countries.
Aujla et al., 2023 [34] | N/A | Canada | Remote and LMIC/Pulmonology and neonatology | Machine learning | Propose an automated point-of-care tool for classifying and interpreting neonatal lung ultrasound (LUS) images, useful in remote regions or developing countries that lack well-trained clinicians.
Abdel-Basset et al., 2022 [35] | N/A | Egypt | Lack of computing resources/Pulmonology | Deep learning | Present a novel, lightweight, and interpretable deep learning framework that discriminates COVID-19 infection from other pneumonias and normal cases, suitable for deployment in point-of-care and/or resource-constrained settings.
Jana et al., 2020 [36] | Patients | India | Lack of computing resources/Cardiology | Machine learning | Develop a smartphone-based portable continuous-wave Doppler ultrasound system that diagnoses peripheral arterial disease from hemodynamic features in a cost-effective, power-efficient way, suitable for low-resource settings with limited energy and computing resources.
Hannan et al., 2023 [37] | N/A | United States | Emergency/Emergency medicine | Deep learning | Develop a deep learning classifier that helps medical professionals diagnose pneumothorax from POCUS images; it runs on a mobile phone and needs little training data, making it suitable for low-resource settings such as emergency and acute care.
Ekambaram and Hassan, 2023 [38] | Patients | South Africa | LMIC and rural/Emergency medicine | Bayesian machine learning | Propose a novel, Bayesian-inspired, iterative diagnostic framework that uses point-of-care focused echocardiography to evaluate patients with acute cardiorespiratory failure and suspected severe left-sided valvular lesions, overcoming the limitation that current diagnostic protocols cannot perform sufficient quantitative assessment of the left-sided heart valves.
Khan et al., 2016 [39] | Fetuses (16–41 weeks) | Norway | LMIC and rural/Obstetrics | Computer vision (OpenCV, Kalman-based tracker) | Develop an automatic method for localizing the presented abdominal section and measuring the mean abdominal diameter (MAD) of a fetus, designed to operate both in traditional ultrasound settings and in rural areas of low- and middle-income countries.
van den Heuvel et al., 2019 [40] | Pregnant women | Ethiopia | LMIC/Obstetrics | Deep learning | Present a system that automatically estimates fetal head circumference (HC) from POCUS image data acquired with the obstetric sweep protocol (OSP), addressing the lack of access to ultrasound imaging for pregnant women in developing countries, where trained sonographers are needed to acquire and interpret images.
Jafari et al., 2019 [41] | N/A | Canada | Lack of computing resources/Cardiology | Deep learning | Present a computationally efficient deep learning application for accurate left ventricular ejection fraction (LVEF) estimation that runs in real time on Android mobile devices connected (wired or wirelessly) to a cardiac POCUS device, making it suitable for resource-limited environments.
Al-Zogbi et al., 2021 [42] | N/A | United States | Emergency/Pulmonology | Deep learning | Propose an autonomous robotic solution for point-of-care lung ultrasound scanning of COVID-19 patients, based on an algorithm that estimates the optimal position and orientation of the probe on a patient's body to image target points in the lungs; useful where contact between healthcare workers and patients is not feasible (e.g., COVID-19 infection risk).
Blaivas et al., 2020 [43] | N/A | United States, Canada | Lack of computing resources/Various departments | Deep learning | Create and test a "do-it-yourself" (DIY) deep learning algorithm for classifying ultrasound images to enhance the quality assurance workflow of POCUS programs, enabling those in low-resource settings to leverage AI applications for medical imaging that are usually owned by large, well-funded companies.
Baloescu et al., 2020 [44] | N/A | United States | Lack of trained personnel/Emergency medicine | Deep learning | Develop and test a deep learning (DL) algorithm to quantify B-lines in point-of-care lung ultrasound, aiding the workup of shortness of breath, a very common chief complaint in the emergency department (ED); useful in resource-limited settings with few experienced users, as B-line identification and quantification is a challenging skill for novice ultrasound users.
Cheema et al., 2021 [45] | Patients | United States | Lack of trained personnel/Cardiology | Deep learning | Present the novel use of a deep learning-derived technology, trained on the skilled hand movements of cardiac sonographers, that guides novice users in acquiring high-quality bedside cardiac ultrasound images; valuable in resource-limited settings where cardiac sonographers are not readily available.
Blaivas et al., 2021 [46] | N/A | United States | Lack of data/Cardiology | Deep learning | Use unrelated ultrasound window data (apical four-chamber views only), manipulated to simulate a different examination, to train a POCUS machine learning algorithm that estimates left ventricular ejection fraction with fair mean absolute error (MAE); this may help future POCUS algorithm designs overcome the paucity of POCUS databases.
Cho et al., 2024 [47] | Fetuses | South Korea | Lack of computing resources/Obstetrics | Deep learning | Propose an efficient deep learning-based automatic fetal biometry measurement method for a system-on-chip (SoC) solution; results show feasibility on low-resource hardware such as portable ultrasound systems.
Zemi et al., 2024 [48] | N/A | United States | Remote/Oncology | Deep learning | Explore the feasibility of integrating AI algorithms for breast cancer detection into a portable point-of-care ultrasound device; the system achieved a benchmark of at least 15 frames/second, suggesting the framework's usefulness in remote settings.
Karlsson et al., 2023 [49] | N/A | United States | Lack of computing resources and LMIC/Obstetrics | Deep learning | Explore pocket-sized portable ultrasound (POCUS) devices combined with deep learning to classify breast lesions as a cost-effective route to earlier breast cancer detection where access to breast imaging is limited; using 1100 POCUS images augmented with CycleGAN-generated synthetic images, the model achieved a 95% confidence interval for AUC of 93.5–96.6%.
MacLean et al., 2021 [50] | COVID-19 patients | Canada | Lack of computing resources/Pulmonology | Deep learning | Introduce COVID-Net US, a highly efficient, high-performing deep convolutional neural network for COVID-19 screening from lung POCUS images that is small enough to run on low-cost devices, requiring few additional resources when paired with POCUS devices in low-resource environments.
Adedigba et al., 2021 [51] | COVID-19 patients | Nigeria | Lack of computing resources/Pulmonology | Deep learning | Develop a tele-operated robot for COVID-19 diagnosis at the Nigerian National Hospital, Abuja, driven by a deep learning algorithm that automatically classifies lung ultrasound images for rapid, efficient, and accurate diagnosis; the gantry-style positioning unit combined with the efficient algorithm is cheaper to fabricate and better suited to low-resource regions than the robotic arms used in the status quo.
Pokaprakarn et al., 2022 [52] | Pregnant women | United States, Zambia | Lack of computing resources and LMIC/Obstetrics | Deep learning | Develop a deep learning algorithm that estimates gestational age from blind ultrasound sweeps acquired from 4695 pregnant women in North Carolina and Zambia, achieving a mean absolute error (MAE) of 3.9 days versus 4.7 days for standard biometry; the model's accuracy is comparable to that of trained sonographers, even with low-cost devices and untrained users in Zambia.
Karnes et al., 2021 [53] | COVID-19 patients | United States | Lack of computing resources and LMIC/Pulmonology | Deep learning | Introduce a point-of-care (PoC) COVID-19 diagnostic system that employs few-shot learning (FSL) to create encoded disease-state models, using a novel vocabulary-based feature-processing method to compress ultrasound images into discriminative descriptions; results suggest the FSL-based system can extend rapid LUS diagnostics to resource-limited clinics.
Viswanathan et al., 2024 [54] | Fetuses | United States, Zambia | Lack of computing resources and LMIC/Obstetrics | Deep learning | Develop a deep learning model that estimates gestational age (GA) from brief ultrasound videos (fly-to cineloops) to improve the quality and consistency of obstetric sonography in low-resource settings; the model outperformed expert sonographers in GA estimation and can flag grossly inaccurate measurements, providing a no-cost quality control tool that can be integrated into both low-cost and commercial ultrasound devices.
Zeng et al., 2024 [55] | COVID-19 patients | Canada | Lack of computing resources and lack of trained personnel/Pulmonology | Deep learning | Propose COVID-Net L2C-ULTRA, a deep neural network framework that handles the heterogeneity of ultrasound probes via extended linear-convex ultrasound augmentation learning; experiments show significant improvements in test accuracy, AUC, recall, and precision, making it an effective, portable, safe, and cost-effective tool for COVID-19 assessment in resource-limited settings.
Abhyankar and Raman, 2024 [56] | N/A | India | Lack of computing resources/Various departments | Machine learning | Present an intelligent decision support system for point-of-care ultrasound imaging in resource-limited settings: a decision tree algorithm on a Raspberry Pi-powered portable ultrasound device improves image quality and diagnostic accuracy by making informed decisions during image capture and processing, with continuous data collection and user input enabling adaptive learning, optimization, reliability, and regulatory compliance.
Madhu et al., 2023 [57] | COVID-19 patients | India | Lack of equipment/Pulmonology | Deep learning | Propose an optimized Xception convolutional neural network (XCovNet) for COVID-19 detection from POCUS images; depth-wise spatial convolution layers accelerate convolution computation, and the model outperforms other recent deep learning approaches on POCUS imaging, making POCUS a viable basis for imaging-based COVID-19 screening in resource-constrained settings where traditional testing, CT, or X-ray screening is unavailable.
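Several of the efficiency-focused studies above (e.g., XCovNet [57] and the system-on-chip solutions [31,47]) rely on depth-wise separable convolutions to fit deep networks onto low-resource hardware. As a purely illustrative sketch, not code from any cited study, the source of the savings can be seen by simply counting weights: a standard convolution mixes channels and spatial positions in one large kernel, while the separable version splits this into a cheap per-channel spatial filter followed by a 1x1 channel mixer.

```python
def conv_params(c_in: int, c_out: int, k: int) -> int:
    """Weights in a standard k x k convolution (bias omitted):
    every output channel has a full k x k x c_in kernel."""
    return c_in * c_out * k * k

def separable_conv_params(c_in: int, c_out: int, k: int) -> int:
    """Weights in a depth-wise separable convolution:
    one k x k filter per input channel (depth-wise), then a
    1 x 1 point-wise convolution that mixes the channels."""
    depthwise = c_in * k * k
    pointwise = c_in * c_out
    return depthwise + pointwise

if __name__ == "__main__":
    c_in, c_out, k = 64, 128, 3
    std = conv_params(c_in, c_out, k)            # 64 * 128 * 9 = 73728
    sep = separable_conv_params(c_in, c_out, k)  # 64 * 9 + 64 * 128 = 8768
    print(f"standard: {std}, separable: {sep}, ratio: {std / sep:.1f}x")
```

For this hypothetical layer the separable form needs roughly 8x fewer weights, and the same factor applies to multiply-accumulate operations per pixel, which is why Xception-style architectures are attractive on portable POCUS devices.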
Share and Cite

MDPI and ACS Style

Kim, S.; Fischetti, C.; Guy, M.; Hsu, E.; Fox, J.; Young, S.D. Artificial Intelligence (AI) Applications for Point of Care Ultrasound (POCUS) in Low-Resource Settings: A Scoping Review. Diagnostics 2024, 14, 1669. https://doi.org/10.3390/diagnostics14151669

