Review

A Review of the Clinical Applications of Artificial Intelligence in Abdominal Imaging

by Benjamin M. Mervak, Jessica G. Fried and Ashish P. Wasnik *
Department of Radiology, University of Michigan—Michigan Medicine, 1500 E. Medical Center Dr., Ann Arbor, MI 48109, USA
* Author to whom correspondence should be addressed.
These authors contributed equally to this work.
Diagnostics 2023, 13(18), 2889; https://doi.org/10.3390/diagnostics13182889
Submission received: 25 May 2023 / Revised: 23 August 2023 / Accepted: 5 September 2023 / Published: 8 September 2023
(This article belongs to the Special Issue Artificial Intelligence in Radiology 2.0)

Abstract:
Artificial intelligence (AI) has been a topic of substantial interest for radiologists in recent years. Although many of the first clinical applications were in the neuro, cardiothoracic, and breast imaging subspecialties, the number of investigated and real-world applications in body imaging has been increasing, with more than 30 FDA-approved algorithms now available for applications in the abdomen and pelvis. In this manuscript, we explore some of the fundamentals of artificial intelligence and machine learning, review major functions that AI algorithms may perform, introduce current and potential future applications of AI in abdominal imaging, provide a basic understanding of the pathways by which AI algorithms can receive FDA approval, and explore some of the challenges with the implementation of AI in clinical practice.

1. Introduction

The past decade has seen a rapid acceleration in interest and hype regarding artificial intelligence (AI), machine learning (ML), deep learning (DL), and potential applications in healthcare. As a gross measure, PubMed results on AI or ML increased more than 20-fold from roughly 600 scientific papers in 2009 to over 12,500 papers in 2019 [1]. Subsequently, the number of algorithms receiving approval from the United States Food and Drug Administration (FDA) under the umbrella of “software as a medical device” (SaMD) has also rapidly increased, from 7 approved algorithms in early 2016 to more than 200 in early 2023.
As AI and ML algorithms represent adaptable mathematical tools rather than a single static instrument, they have been heralded for widespread application within radiology departments, with proposed uses including image quality control, augmentation and prioritization of workflows and worklists, segmentation of organs and lesions, detection of incidental findings, and assisting with classification [2]. The earliest clinical applications were primarily seen in the areas of cardiothoracic imaging, neuroradiology, and breast imaging [3], with algorithms trained to analyze radiographic or mammographic images. Clinical applications of artificial intelligence and machine learning in abdominal imaging have been slower to develop, potentially due to the large number of organs/systems evaluated, reliance on multiple cross-sectional modalities, and variety of possible pathologies. However, recent years have seen the increasing development and release of algorithms for abdominal imagers.
In this manuscript, we provide an overview of artificial intelligence and machine learning techniques, review major functions that AI algorithms can perform, introduce current and potential future applications of AI in abdominal imaging, outline pathways by which AI algorithms can receive FDA approval, and explore some of the challenges faced when implementing AI in clinical practice.

2. Materials and Methods

For this review, a literature search was conducted using a combination of Ovid Medline, PubMed, and Google Scholar, highlighting articles from 2017–2022. Key terms included “clinical applications”, “artificial intelligence”, “machine learning”, “radiomics”, “texture analysis”, “incidental findings”, “segmentation”, “detection”, “abdominal CT”, “abdominal MR”, “radiology”, “body imaging”, and “abdominal imaging”. Non-English language results were excluded. Titles and abstracts were reviewed by the authors and selected based on relevance; all authors are fellowship-trained in abdominal imaging with 3, 7, and 15 years of post-certification experience.
An assessment of algorithms cleared by the United States Food and Drug Administration (FDA) was also conducted at the time of manuscript preparation (April 2023). The AI Central website hosted by the American College of Radiology Data Science Institute was consulted [4], in addition to the FDA registry and non-FDA publicly available resources listed on the FDA website for Artificial Intelligence and Machine Learning (AI/ML)-Enabled Medical Devices [5]. The AI for Radiology website provided an additional resource of interest for commercially available AI algorithms in Europe, although not all algorithms listed there are approved for clinical use in the United States [6].
At the time of manuscript preparation, a total of 34 FDA-cleared SaMD products were identified within abdominal imaging, with some software having been cleared for use with more than one modality. Of these, 13 were for use with computed tomography (CT), 19 for magnetic resonance imaging (MRI), 0 for positron emission tomography (PET), 2 for ultrasound (US), and 3 for radiography (XR) or mammography. Three were for use in adult or pediatric patients, and thirty-one were cleared for use only in adult patients.
Due to the wide range of possible uses of AI and machine learning, this manuscript focuses on clinical applications that may be of interest to abdominal radiologists in the United States. “Upstream” applications spanning multiple radiologic divisions like CT de-noising, 3D printing, augmented viewing software, and applications in vascular imaging are not specifically addressed in this review.

3. Discussion

3.1. Terminology and Definitions

A basic understanding of the meaning, intersections, and divergence of the terms ‘artificial intelligence’, ‘machine learning’, and ‘deep learning’ is required to explore the landscape of SaMD in abdominal imaging.
Artificial intelligence is the umbrella term used to encompass the development of systems capable of simulating human intelligence through learning and problem-solving. There are multiple methods that can be categorized as artificial intelligence [1,7].
Formal methods are one of the earliest techniques used to create artificial intelligence algorithms. Rather than using large datasets, formal methods use rigorous mathematical techniques to explicitly program systems in a rules-based manner, allowing responses to inputs or queries [8].
Machine learning is a subset of artificial intelligence in which systems are trained using large datasets to recognize patterns and subsequently respond to queries based on the training in relevant domains. In machine learning, most training datasets are structured, well-organized, and annotated by a ‘gold standard’ frequently requiring substantial human input. After training on these large datasets, algorithms are then able to independently respond to test datasets based upon feature mapping and statistical modeling ‘learned’ from the training datasets [1,7].
Deep learning is a subset of machine learning that utilizes layered ‘neural networks’, allowing unsupervised learning by the system using unorganized and unstructured data. The heuristics used by the system to develop responses to inputs and queries are independently developed by the system’s neural networks. Unlike classical machine learning, deep learning does not require substantial human input or supervision [1,7].

3.2. Organ Segmentation

Segmentation is defined as dividing a whole into parts or sections, which can have many applications in abdominal radiology. Much of an abdominal radiologist’s time is spent differentiating normal from abnormal tissues, identifying the organ from which an abnormality arises, or measuring boundaries and sizes. Segmentation, whether outlining organs or lesions, volumetrically tracking lesions over time, or providing an objective assessment of disease severity, was one of the first applications of AI in radiologic imaging [9]. Although precise, manual segmentation may be tedious and time-consuming. AI algorithms can expedite segmentation, either requiring no human input (automatic segmentation) or requiring limited human input during the pre-processing and/or post-processing phases (semi-automatic segmentation [10]).
To gauge the performance of a segmentation algorithm, a statistical test known as a Sørensen–Dice index or Dice similarity coefficient (DSC) is often used to compare the similarity of an automated segmentation with the “ground truth” based on manual segmentation performed by a radiologist or other trained individual, with higher DSCs indicating better performance. Segmentation is typically most accurate in organs that are discrete with less anatomical variability or abutment of surrounding structures.
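The DSC described above can be written as DSC = 2|A ∩ B| / (|A| + |B|). As an illustrative sketch (not drawn from any of the cited studies), the metric can be computed for two binary segmentation masks represented as flat lists of 0/1 pixel labels; the mask values below are hypothetical toy data.

```python
# Illustrative sketch: Dice similarity coefficient (DSC) between two
# binary segmentation masks, DSC = 2*|A ∩ B| / (|A| + |B|).

def dice_coefficient(pred, truth):
    """Compute DSC for two equal-length binary masks (lists of 0/1)."""
    assert len(pred) == len(truth)
    intersection = sum(p and t for p, t in zip(pred, truth))
    total = sum(pred) + sum(truth)
    if total == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * intersection / total

# Toy example: automated mask vs. radiologist "ground truth"
auto = [0, 1, 1, 1, 0, 0, 1, 0]
manual = [0, 1, 1, 0, 0, 0, 1, 1]
print(dice_coefficient(auto, manual))  # → 0.75
```

A DSC of 1.0 indicates perfect overlap and 0.0 indicates none, which is why the organ-specific DSCs quoted in this section (e.g., >0.9 for liver and kidney, >0.65 for small bowel) serve as a compact summary of segmentation quality.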
Several studies have shown high accuracy of liver segmentation by AI algorithms with corresponding high Dice scores above 0.9 [11]. Many applications exist for liver segmentation, including the delineation of Couinaud liver segments, evaluation of hepatic lobar volumes before a hepatic resection or split-liver transplant donation, or assessment of liver fat fraction or iron concentration, among others. Segmentation of hepatobiliary substructures like the biliary tree, hepatic vasculature, and gallbladder has also been explored more recently [12,13].
Segmentation of genitourinary structures is also generally accurate, with reported DSCs for segmentation of the kidneys, bladder, and prostate reported to be above 0.9 [14,15,16]. Clinical uses include many oncologic applications before partial or radical nephrectomy, calculation of prostate-specific antigen (PSA) density, radiation therapy planning for prostate cancer, or partial cystectomy. Automated segmentation may also allow chronic disease processes to be more easily tracked, for example, when evaluating the total kidney volume in patients with autosomal dominant polycystic kidney disease [15].
The gastrointestinal system is challenging to segment due to the anatomical variability, complex configuration, abutment of other organs, and close apposition of discontinuous bowel loops to one another. Nonetheless, segmentation of the bowel has been pursued, with higher Dice scores found for rectum (DSCs > 0.85) [17,18,19] and colon (DSCs > 0.80) [20] than for small bowel (DSCs > 0.65) [20,21,22]. The clinical utility of segmenting the bowel includes several oncologic applications, like pre-radiation planning before pelvic radiation therapy. Other uses, like the detection and classification of inflammatory bowel disease, mucosal lesions, or bowel obstruction, have also been explored [23].
Other miscellaneous organs with high Dice scores following segmentation (DSCs~0.95) include the spleen [24], visceral or body wall adipose [25], and muscle tissue [26]. There are fewer clinical applications for splenic segmentation, although volumetric analysis of spleen size over time could be helpful when tracking disease processes that can cause splenomegaly. While not commonly performed by radiologists, automated segmentation of adipose or muscle tissue allows for opportunistic screening, as discussed below.

3.3. Lesion Detection

Radiologists assess and analyze images to identify, detect, and characterize disease processes. Early, sensitive, and reliable detection of lesions can play a critical role in the treatment plan for patients; for example, liver metastasis detection is paramount in emerging innovative surgical strategies for minimally invasive simultaneous resections of synchronous liver metastases in primary colorectal cancer [27]. AI has proven capable of differentiating normal and abnormal imaging findings based on algorithmic statistical models and can often provide a quantitative assessment of disease [28]. In the last decade, DL models have shown a substantial impact on the detection, segmentation, and classification of disease processes on imaging [29,30]. DL methods have been successfully used in the detection and classification of malignant lesions in various organs and imaging modalities, such as thyroid nodules on US, pulmonary nodules on CT, breast lesions on mammograms and ultrasound, pancreatic cancer on CT, and prostate cancers on MRI [28,31,32,33].
Chen and Wu et al. showed that a deep learning-based CAD tool could differentiate CT studies with and without pancreatic cancer with a sensitivity of 89.7% and a specificity of 92.8% [32]. AI-based tools have been used in segmentation, automated detection, and risk stratification with a similar area under the receiver operating characteristic curve (AUC) to expert readers in identifying clinically significant prostate cancer (81.9% expert reader vs. 83.2% CAD) [33,34]. Additionally, CAD was shown to reduce reading time in detecting suspicious prostate cancer lesions on multiparametric MRI to 2.7 min compared to 3.5 min for experienced readers and 6.7 min for moderately experienced readers [35]. Deep learning algorithms such as U-Net and fully convolutional networks have been used in liver tumor detection, with segmentation-based CNN increasing the new liver tumor detection rate from 72% to 86% [36,37].
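The AUC figures quoted above have a useful probabilistic interpretation: the AUC equals the probability that a randomly chosen positive case receives a higher model score than a randomly chosen negative case. A minimal sketch of this rank-based (Mann–Whitney) formulation follows; the scores are hypothetical and not taken from the cited studies.

```python
# Illustrative sketch: AUC via the Mann-Whitney rank statistic.
# AUC = P(score of a random positive > score of a random negative),
# with ties counted as one half.

def auc(scores_pos, scores_neg):
    """Compare every positive score against every negative score."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))

# Hypothetical model outputs: higher score = more suspicious for cancer
positives = [0.9, 0.8, 0.6]  # scores for biopsy-proven lesions
negatives = [0.7, 0.4, 0.2]  # scores for benign findings
print(round(auc(positives, negatives), 3))  # → 0.889
```

Under this interpretation, an AUC of 0.832 for a CAD tool means that roughly 83% of the time, a clinically significant lesion is scored as more suspicious than a benign one.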

3.4. Detection of Incidental Findings

Incidental findings are encountered daily during the interpretation of abdominal imaging studies, particularly CT and MRI. Although many potential incidental findings exist within the abdomen and pelvis (e.g., cystic adnexal lesions, renal lesions, or adrenal lesions), there are also incidental findings that may be clinically important but originate within other organ systems included in the field of view—for example, incidental pulmonary emboli at the lung bases on a CT of the abdomen and pelvis. Due to the potential for these findings to be missed [38] and the possible need for urgent treatment or other follow-up, routine analysis of these areas by AI algorithms may be helpful to abdominal radiologists.
At the time of manuscript preparation, commercially available AI algorithms directed at incidental findings included the detection of incidental pulmonary emboli at the lung bases, free intraperitoneal air, vertebral compression fractures, or aortic dissection. Additional investigative work has also been directed toward the automated detection of various findings, including free fluid or focal inflammation (fat stranding) [39], cystic pancreatic lesions [40], vertebral body fractures [41,42], and IVC filters on radiographs [43], among others. While there is some overlap between detecting incidental findings and intentionally using a CT examination for opportunistic screening, automated acquisition of metrics on body composition, osteopenia, and others is further discussed below.

3.5. Characterization of Findings

While the detection of relevant imaging findings was a primary focus of early artificial intelligence models in radiology, there has been increasing effort to develop AI algorithms capable of not just detecting but also characterizing imaging findings. AI-assisted characterization has the potential to improve radiologist accuracy, throughput, and efficiency.
Most work in this space to date has focused on developing AI capable of differentiating benign from malignant lesions; however, with increasing investment in radiomic techniques like those discussed below, tumor subtype and phenotype characterization is emerging as an additional area of research interest. While there is only a single FDA-approved algorithm designed to perform a lesion characterization function (ProstatID, BotImage, Omaha, NE, USA), there has been ample published research on potential abdominal imaging applications of AI-assisted lesion characterization in the liver, biliary tract, kidneys, adrenal glands, ovaries, and prostate.
Multiple algorithms have been developed to characterize focal liver lesions as benign or malignant on US and CT, showing an AUC comparable to radiologist performance [44,45,46]. Other research focused on the automated differentiation of hepatocellular carcinoma from other malignant tumors, such as intrahepatic cholangiocarcinoma [47,48]. Yin et al. developed a deep learning convolutional neural network (CNN) capable of distinguishing gallbladder cancer from benign gallbladder diseases on CT (AUC, 0.81; 95% CI 0.71–0.92) [49]. Another algorithm was developed by Nikpanah et al. to distinguish clear cell renal cancer from oncocytoma, with the algorithm outperforming expert radiologists included in the study (AI performance: AUC 0.81, PPV 0.78, and NPV 0.86) [50]. There have been at least three algorithms developed to date for adrenal lesion characterization, again with performance similar to or better than radiologists in published studies [51]. There have been successful proof-of-concept investigations using texture-based ML to distinguish malignant from benign ovarian lesions at CT [52]. Algorithms have also been developed to distinguish low- and high-aggressivity prostate cancer on multiparametric MRI with AUC in the range of 0.81 [53,54].

3.6. Opportunistic Screening

Opportunistic screening is the collection and evaluation of imaging data for purposes beyond the primary clinical indication for the study, using imaging-based biomarkers to identify and predict individuals at risk of potential adverse clinical outcomes [55]. Visceral adipose tissue and subcutaneous adipose tissue are associated with changes in cardiovascular disease risk factors [56]. Pickhardt et al. showed that DL and feature-based automated extraction of CT biomarkers outperformed a principal clinical parameter of body mass index (BMI) in predicting major cardiovascular events in an asymptomatic outpatient cohort of 9223 adults undergoing low-dose CT screening for colorectal cancer. In this study, automated algorithms were used to quantify five parameters: aortic calcification, muscle density, visceral/subcutaneous fat ratio, liver density, and vertebral density [55]. In another study, a multivariable support vector machine model using the CT attenuation of multiple bones was used to predict osteoporosis/osteopenia [57]. AI-based opportunistic assessment of CT studies for hepatic parenchymal attenuation, surface nodularity, and volumetry has shown the feasibility of assessing fat and iron overload as well as quantitative evaluation of hepatomegaly [58,59,60]. Progressive sarcopenia (loss of muscle mass, function, or quality) has been shown to negatively predict overall and progression-free survival in colorectal cancer patients undergoing serial CTs [61] and is another feature that can be reliably extracted from images by trained AI algorithms [62].

3.7. Radiomics, Texture Analysis, and Quantification

Radiomics is the extraction of quantitative imaging features at the cellular or tissue level that are beyond visual perception and the correlation of those features with the ground truth or a specific endpoint to improve diagnostic and prognostic accuracy [63,64]. The major steps in radiomics include image acquisition, segmentation, texture feature extraction, data analysis, prediction model construction, and validation [65]. The texture feature analysis can be classified into shape-based features, histogram features, grey-level co-occurrence matrix-based features, grey-level run-length matrix-based features, and other higher-order statistical features [64]. Various studies have shown the potential of CT- and MR-based radiomics models in diagnosis, differentiation, treatment selection, and prognostic outcomes in hepatocellular carcinoma [66,67,68]. DL-based radiomic MRI features, along with clinical characteristics, have shown an accuracy of 84% and sensitivity of 94% in the preoperative prediction of tumor deposits in patients with rectal cancer [69]. CT-based radiomic texture features have shown good performance in grading and differentiating renal tumor subtypes [70]. Mukherjee et al. showed the ability of radiomics-based ML models to detect pancreatic ductal adenocarcinoma on CT imaging at a substantial lead time (median 398 days) before the clinical diagnosis [71]. Radiogenomics is another evolving area in oncologic imaging where qualitative and quantitative imaging features are extracted and correlated with the genetic profile of the tissue to generate prediction models for disease outcomes [72,73]. Both radiomics and radiogenomics have shown promising results in predicting treatment response, nodal status, and metastasis in colorectal cancer, as well as the potential for KRAS mutations [69,74,75,76].
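To make the grey-level co-occurrence matrix (GLCM) features mentioned above concrete, the following is a minimal sketch (not the method of any cited study) that builds a GLCM for horizontal neighbor pairs in a tiny toy image and derives one classic texture feature, contrast; the image values and function names are illustrative assumptions, and production radiomics pipelines use standardized implementations with many more offsets and features.

```python
# Illustrative sketch: grey-level co-occurrence matrix (GLCM) for a
# small 2D image, counting horizontal neighbor pairs, plus the classic
# "contrast" texture feature derived from it.

def glcm_horizontal(image, levels):
    """Count co-occurrences of grey levels (i, j) one pixel apart horizontally."""
    glcm = [[0] * levels for _ in range(levels)]
    for row in image:
        for a, b in zip(row, row[1:]):
            glcm[a][b] += 1
    return glcm

def contrast(glcm):
    """Sum of (i - j)^2 weighted by the normalized co-occurrence counts."""
    total = sum(sum(row) for row in glcm)
    return sum(
        ((i - j) ** 2) * count / total
        for i, row in enumerate(glcm)
        for j, count in enumerate(row)
    )

# Toy 4x4 "image" with grey levels 0..3 (hypothetical data)
img = [
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [0, 2, 2, 2],
    [2, 2, 3, 3],
]
g = glcm_horizontal(img, levels=4)
print(round(contrast(g), 3))  # → 0.583
```

Low contrast reflects homogeneous texture (neighboring pixels share similar grey levels); higher-order features of this kind are what radiomics models correlate with histology, genomics, or outcomes.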
While radiomics may improve future prognostication based on tumor characteristics, at a practical level, simply improving radiologists’ accuracy in identifying metastatic lesions is arguably one of the most clinically promising applications of AI lesion characterization, given the impact that metastatic disease evaluation can have on prognostication. For example, Bian et al. have developed an algorithm designed to predict lymph node metastases in patients with pancreatic adenocarcinoma. In their study, the algorithm showed superior performance (AUC, 0.92; 95% CI 0.88–0.96) in predicting lymph node metastases compared to radiologists (AUC, 0.58; 95% CI 0.53–0.63) [77]. Future AI research focused on improving the accuracy, sensitivity, and specificity of staging for abdominal oncologic processes is likely to yield additional prognostication benefits.

3.8. FDA Approval and 510(k) Premarket Notification

The United States Food and Drug Administration (FDA) plays a critical role in ensuring the safety and effectiveness of medical products. Receiving clearance from the FDA is a key step for device manufacturers before legally marketing medical devices in the United States. Although a full description of the application, steps, and required documents is outside the scope of this clinical review, understanding certain details may be beneficial to radiologists as they approach vendors and explore available algorithms.
After being outlined in a 2019 discussion paper [78] and modified based on feedback, a plan entitled the “Artificial Intelligence and Machine Learning (AI/ML) Software as a Medical Device Action Plan” [79] was created in 2021. Under this plan, medical products that rely on AI algorithms to treat, diagnose, cure, mitigate, or prevent disease or other conditions are regulated (undergo “premarket review”) as “software as a medical device” (SaMD) using one of three pathways, each with specific data requirements [79]: premarket notification also known as 510(k) [80], premarket approval [81], and de novo classification [82]. The level of impact and risk on patients helps determine which review pathway a product should follow [83,84]. Many AI algorithms in radiology obtain approval via the premarket notification/510(k) pathway, which requires manufacturers to demonstrate that their product is “substantially equivalent” to current legally marketed devices [80]. As some AI algorithms are “unlocked” and can learn and change over time, additional review may be required according to a “predetermined change control plan” in which the ways and areas of planned AI learning/modification are outlined by device manufacturers [79].

3.9. Challenges

3.9.1. Clinical Translation

There are many barriers, pitfalls, and challenges to the clinical translation of artificial intelligence algorithms under development. Some of these are challenges faced by all translational research endeavors; however, we will focus our discussion on barriers unique to artificial intelligence applications.
Generalizability has been an insurmountable challenge for many algorithms attempting to make the leap from the computer science lab to clinical application. The training, validation, and even test data for algorithms shown to have spectacular performance in research studies often underestimate the variability of real-world data. It is estimated that <10% of AI models described in the literature are tested on external data [85]. Variability in imaging modality vendor, imaging parameters or protocol, and patient-related factors have all been shown to have deleterious effects on the performance statistics of artificial intelligence algorithms [86]. The challenges of generalizability are most pronounced in classical machine learning applications; however, the same challenges have been seen to affect deep learning techniques [86]. External validation partnerships between labs, increased sharing of de-identified datasets and algorithms, and development of centralized aggregate ‘real life’ clinical test data could help address generalizability concerns.
A critical consideration related to generalizability is the potential for bias introduced into artificial intelligence models, whether this occurs unintentionally in the construct of training datasets or if it develops ‘organically’ in the deep learning process of layered neural networks. Research has shown that artificial intelligence algorithms are capable of bias and are at risk of perpetuating discrimination and/or healthcare disparities if not actively mitigated [87]. Given challenges in mitigating bias in US healthcare regardless of the use of artificial intelligence, special attention to this concern is warranted as artificial intelligence algorithms are developed and deployed into clinical practice. Research specifically addressing the risks and implications of unintended bias in AI and how to avoid and mitigate these biases in algorithms is desperately needed.
Automation bias is another pitfall cited as a major potential problem if artificial intelligence is widely adopted into clinical practice. Automation bias refers to the propensity for humans to favor suggestions from automated decision-making systems, often while ignoring sound, contradictory, non-automated information. According to researchers, there is a human expectation that artificial intelligence performs at perfect or near-perfect levels; however, human beings are likely to judge failures in artificial intelligence less harshly than the failures of humans [88,89]. These heuristics contribute to automation bias. The dangers of automation bias should encourage clinical practices adopting artificial intelligence into their workflows to develop automation-bias mitigation strategies. Unfortunately, this is currently an area of investigation with significant unmet needs.
A unique challenge in deep learning algorithms is that the decision-making process of the system is typically opaque. Research has shown the public is broadly distrustful of AI technology, with a reported 63% of citizens from five countries unwilling or ambivalent about trusting AI in healthcare [90]. The opaque ‘black-box’ nature of deep learning neural networks is a particularly challenging concept to overcome in building trust in artificial intelligence technology in the public sphere [91]. Consumers may demand increasingly stringent regulatory standards for the review and approval of artificial intelligence technology. Issues related to accountability, culpability, fairness, and nondiscrimination will be equally important to the public as algorithm accuracy and performance.

3.9.2. Governance

The landscape of AI governance is likely to continue to evolve significantly over the next decade. Unlike governance for prescription drugs or medical devices, governance of artificial intelligence systems cannot be simply satisfied through the initial FDA evaluation and approval. Outside of the unique governance challenges related to the black-box nature of many deep learning techniques, artificial intelligence algorithms exhibit a propensity for dataset shift, resulting in a need for ongoing evaluation and oversight. Dataset shift occurs when training and test distributions differ. Triggers for dataset shift can include but are not limited to (a) new or different data acquisition techniques upstream of the model, (b) changes in IT practices upstream from the model (e.g., data structuring and naming conventions), (c) new software and IT infrastructure with which the model interacts, and (d) changing population characteristics or demographics [92].
In addition, governance of AI requires consideration of privacy issues as they relate to the transmission, processing, and storage of personal data. In the European Union, the General Data Protection Regulation (GDPR) became law in 2018, defining individuals’ rights in a digital age and outlining methods to ensure compliance [93]. In the United States, medical data regulation is addressed by the Health Insurance Portability and Accountability Act (HIPAA).
It is likely that optimal models of governance for artificial intelligence in healthcare will ultimately require cooperation and shared responsibility between governmental/regulatory bodies and clinical practices deploying these systems.

3.9.3. Cost

The costs to implement artificial intelligence may also create a barrier to bringing algorithms into real-world clinical workflows, particularly in the private practice setting where research and educational uses may not help to justify the costs of ownership. Costs exist in each step of the purchasing and installation process, from time spent by project managers and IT professionals to create and review a request for proposals (RFP), negotiation with preferred vendors, expenses to set up any necessary servers or build interfaces with the radiology information system (RIS) or picture archiving and communication system (PACS), operating expenses for utilizing the algorithm (most commonly an annual subscription or per-case fee rather than a single capital purchase [94]), and costs to monitor and ensure that hardware and software packages are continuing to perform optimally.
Under the predominantly fee-for-service model in the United States, few avenues currently exist to directly generate revenue using AI algorithms, as most AI analysis does not constitute a billable service. Instead, savings are often realized via improvements in efficiency, reduction in missed pathology and the potential for patient harm, or reduction in tedious work for radiologists, technologists, and other radiology department personnel. Notably, some inroads toward direct reimbursement have been made with the United States Centers for Medicare and Medicaid Services (CMS) as they consider the potential impacts that AI may have on the quality of patient care. CMS has recently developed the “New Technology Add-on Payment” (NTAP), which can help share the cost of new technologies between CMS and hospitals. Under this pathway, an example of an early—albeit currently temporary—success story for radiologists is the potential to obtain payment from CMS when an AI algorithm is used during the diagnosis and treatment of large vessel occlusion [95].

3.9.4. Risk

As AI algorithms are tasked with more advanced processes (e.g., lesion characterization or opportunistic screening), many ethical and liability questions arise. In part, these concerns may also be driven by the ‘black-box’ nature of an AI algorithm, which outputs a decision and potentially a level of confidence but offers only limited ability to explore the reasoning behind the output. Additionally, when an adverse outcome like a missed diagnosis occurs, there are multiple involved parties, which may include the algorithm vendor, radiologist, or governance committee monitoring algorithm performance. At present, the radiologist bears the responsibility for rendering a final opinion on a radiologic study, regardless of what the AI output is. However, automation bias (as discussed above) may cloud judgment. If diagnostic physicians over-rely on AI outputs, there is a risk that diagnostic accuracy could decrease and result in worsened patient care in cases of AI algorithm failure [91,96,97,98,99].
The distribution of risk may be even more complex in opportunistic screening. For example, if visceral fat is automatically segmented, quantified, and recorded in the radiology report or electronic medical record, who is responsible for reviewing the resulting risk-stratification data and making medical decisions based on it? Outputs from algorithms may reach far beyond the radiology department alone, and it will be important to hold conversations with all involved parties as algorithms are implemented within a practice or health system.
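To make the opportunistic-screening workflow concrete, the sketch below illustrates one simple way visceral fat might be quantified on a single axial CT slice: voxels within the typical adipose attenuation range (roughly -190 to -30 Hounsfield units) are counted inside a visceral compartment mask and converted to an area. This is a minimal, hypothetical illustration only; in practice the compartment mask would come from an upstream deep-learning segmentation model, and the function name, HU thresholds as fixed constants, and synthetic input data are all assumptions for demonstration, not any specific FDA-approved product's method.

```python
import numpy as np

# Adipose tissue on CT typically measures about -190 to -30 HU (assumed thresholds).
FAT_HU_RANGE = (-190, -30)

def visceral_fat_area_cm2(ct_slice_hu, visceral_mask, pixel_spacing_mm=(0.8, 0.8)):
    """Estimate visceral adipose tissue area (cm^2) on one axial CT slice.

    ct_slice_hu   : 2D array of Hounsfield units.
    visceral_mask : 2D boolean array marking the visceral compartment
                    (assumed to come from an upstream segmentation model).
    """
    low, high = FAT_HU_RANGE
    # Fat voxels = attenuation in the adipose range AND inside the compartment.
    fat_voxels = (ct_slice_hu >= low) & (ct_slice_hu <= high) & visceral_mask
    pixel_area_cm2 = (pixel_spacing_mm[0] / 10.0) * (pixel_spacing_mm[1] / 10.0)
    return float(fat_voxels.sum()) * pixel_area_cm2

# Synthetic example: soft tissue (~40 HU) with a 20x20-pixel fat patch (-100 HU).
slice_hu = np.full((100, 100), 40.0)
slice_hu[40:60, 40:60] = -100.0          # simulated visceral fat
mask = np.zeros((100, 100), dtype=bool)
mask[20:80, 20:80] = True                # simulated visceral compartment

area = visceral_fat_area_cm2(slice_hu, mask)
# 400 fat pixels x (0.08 cm x 0.08 cm) = 2.56 cm^2
```

Even in this toy form, the example highlights the downstream-responsibility question raised above: the number produced is a risk-stratification datum that someone outside radiology may ultimately need to act on.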
Other risk-related topics discussed in the recent literature include data security, given the large volume of patient information needed to drive AI algorithms [99,100], the potential for malicious misuse of AI to create or alter images [101,102], and the potential for AI to impart bias based on race, gender, or other factors [87].

4. Conclusions

While we have already witnessed the explosive growth of artificial intelligence applications in radiology over the past decade, it is likely that we have only seen the tip of the iceberg. There is still considerable room for growth in artificial intelligence applications in radiology, particularly in abdominal imaging. At the time of manuscript preparation, there were 34 FDA-approved algorithms applicable to abdominal imaging, yet hundreds of research studies describing promising algorithms have been published in just the past five years.
Advances in segmentation and lesion detection have paved the way for more advanced lesion-characterization and radiomics applications of artificial intelligence to be explored. Incidental finding detection and opportunistic screening represent two intriguing areas of algorithm development with potential benefits to all radiologists, regardless of specialty. Algorithms aimed at improving disease prognostication have the potential to be incredibly impactful in clinical practice, and we are likely to see increased investment in this space in the coming years.
The FDA approval process and post-approval governance of artificial intelligence algorithms will continue to evolve as more SaMD comes to market. The myriad barriers, challenges, and pitfalls in AI algorithm development will inform this evolution as researchers, regulatory bodies, and clinical practices work together to address concerns related to the generalizability of algorithms, bias/discrimination in algorithms, and human–machine interaction factors. Ultimately, innovation will be driven by reform in reimbursement and the proliferation of legal precedent in the space as artificial intelligence becomes a part of our daily practice environment.

Author Contributions

All authors contributed substantially to this work, including conceptualization, investigation, writing—original draft preparation, and writing—review and editing. Additional supervision was provided by the senior author, A.P.W. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Conflicts of Interest

B.M.M. and J.G.F. declare no conflicts of interest. A.P.W. has no conflicts of interest related to this manuscript. Unrelated disclosures for A.P.W.: book royalties from Elsevier Inc.; royalties for intellectual property licensed by the University of Michigan to Applied Morphomics, Inc.; and research support from Sequana Medical NV, payable to the University of Michigan.

References

  1. Mesko, B.; Görög, M. A Short Guide for Medical Professionals in the Era of Artificial Intelligence. NPJ Digit. Med. 2020, 3, 126. [Google Scholar] [CrossRef]
  2. Choy, G.; Khalilzadeh, O.; Michalski, M.; Do, S.; Samir, A.E.; Pianykh, O.S.; Geis, J.R.; Pandharipande, P.V.; Brink, J.A.; Dreyer, K.J. Current Applications and Future Impact of Machine Learning in Radiology. Radiology 2018, 288, 318–328. [Google Scholar] [CrossRef]
  3. Tariq, A.; Purkayastha, S.; Padmanaban, G.P.; Krupinski, E.; Trivedi, H.; Banerjee, I.; Gichoya, J.W. Current Clinical Applications of Artificial Intelligence in Radiology and Their Best Supporting Evidence. J. Am. Coll. Radiol. 2020, 17, 1371–1381. [Google Scholar] [CrossRef] [PubMed]
  4. American College of Radiology Data Science Institute AI Central. Available online: https://aicentral.acrdsi.org/ (accessed on 1 December 2022).
  5. United States Food and Drug Administration. Artificial Intelligence and Machine Learning (AI/ML)-Enabled Medical Devices. Available online: http://www.fda.gov/medical-devices/software-medical-device-samd/artificial-intelligence-and-machine-learning-aiml-enabled-medical-devices (accessed on 1 December 2022).
  6. AI for Radiology. Available online: www.AIforRadiology.com (accessed on 15 February 2023).
  7. Chartrand, G.; Cheng, P.M.; Vorontsov, E.; Drozdzal, M.; Turcotte, S.; Pal, C.J.; Kadoury, S.; Tang, A. Deep Learning: A Primer for Radiologists. RadioGraphics 2017, 37, 2113–2131. [Google Scholar] [CrossRef] [PubMed]
  8. Langley Formal Methods Program: What Is Formal Methods. Available online: https://shemesh.larc.nasa.gov/fm/fm-what.html (accessed on 25 July 2023).
  9. Lee, S.; Summers, R.M. Clinical Artificial Intelligence Applications in Radiology: Chest and Abdomen. Radiol. Clin. N. Am. 2021, 59, 987–1002. [Google Scholar] [CrossRef]
  10. Ramkumar, A.; Dolz, J.; Kirisli, H.A.; Adebahr, S.; Schimek-Jasch, T.; Nestle, U.; Massoptier, L.; Varga, E.; Stappers, P.J.; Niessen, W.J.; et al. User Interaction in Semi-Automatic Segmentation of Organs at Risk: A Case Study in Radiotherapy. J. Digit. Imaging 2016, 29, 264–277. [Google Scholar] [CrossRef]
  11. Wang, K.; Mamidipalli, A.; Retson, T.; Bahrami, N.; Hasenstab, K.; Blansit, K.; Bass, E.; Delgado, T.; Cunha, G.; Middleton, M.S.; et al. Automated CT and MRI Liver Segmentation and Biometry Using a Generalized Convolutional Neural Network. Radiol. Artif. Intell. 2019, 1, 180022. [Google Scholar] [CrossRef] [PubMed]
  12. Ivashchenko, O.V.; Rijkhorst, E.-J.; Ter Beek, L.C.; Hoetjes, N.J.; Pouw, B.; Nijkamp, J.; Kuhlmann, K.F.D.; Ruers, T.J.M. A Workflow for Automated Segmentation of the Liver Surface, Hepatic Vasculature and Biliary Tree Anatomy from Multiphase MR Images. Magn. Reson. Imaging 2020, 68, 53–65. [Google Scholar] [CrossRef] [PubMed]
  13. Kitrungrotsakul, T.; Han, X.-H.; Iwamoto, Y.; Lin, L.; Foruzan, A.H.; Xiong, W.; Chen, Y.-W. VesselNet: A Deep Convolutional Neural Network with Multi Pathways for Robust Hepatic Vessel Segmentation. Comput. Med. Imaging Graph. 2019, 75, 74–83. [Google Scholar] [CrossRef]
  14. Nemoto, T.; Futakami, N.; Yagi, M.; Kunieda, E.; Akiba, T.; Takeda, A.; Shigematsu, N. Simple Low-Cost Approaches to Semantic Segmentation in Radiation Therapy Planning for Prostate Cancer Using Deep Learning with Non-Contrast Planning CT Images. Phys. Medica 2020, 78, 93–100. [Google Scholar] [CrossRef]
  15. Daniel, A.J.; Buchanan, C.E.; Allcock, T.; Scerri, D.; Cox, E.F.; Prestwich, B.L.; Francis, S.T. Automated Renal Segmentation in Healthy and Chronic Kidney Disease Subjects Using a Convolutional Neural Network. Magn. Reson. Med. 2021, 86, 1125–1136. [Google Scholar] [CrossRef]
  16. Bardis, M.; Houshyar, R.; Chantaduly, C.; Tran-Harding, K.; Ushinsky, A.; Chahine, C.; Rupasinghe, M.; Chow, D.; Chang, P. Segmentation of the Prostate Transition Zone and Peripheral Zone on MR Images with Deep Learning. Radiol. Imaging Cancer 2021, 3, e200024. [Google Scholar] [CrossRef]
  17. Schreier, J.; Genghi, A.; Laaksonen, H.; Morgas, T.; Haas, B. Clinical Evaluation of a Full-Image Deep Segmentation Algorithm for the Male Pelvis on Cone-Beam CT and CT. Radiother. Oncol. 2020, 145, 1–6. [Google Scholar] [CrossRef]
  18. Zhang, Z.; Zhao, T.; Gay, H.; Zhang, W.; Sun, B. ARPM-Net: A Novel CNN-Based Adversarial Method with Markov Random Field Enhancement for Prostate and Organs at Risk Segmentation in Pelvic CT Images. Med. Phys. 2021, 48, 227–237. [Google Scholar] [CrossRef]
  19. Hamabe, A.; Ishii, M.; Kamoda, R.; Sasuga, S.; Okuya, K.; Okita, K.; Akizuki, E.; Sato, Y.; Miura, R.; Onodera, K.; et al. Artificial Intelligence–Based Technology for Semi-Automated Segmentation of Rectal Cancer Using High-Resolution MRI. PLoS ONE 2022, 17, e0269931. [Google Scholar] [CrossRef] [PubMed]
  20. Wu, W.; Lei, R.; Niu, K.; Yang, R.; He, Z. Automatic Segmentation of Colon, Small Intestine, and Duodenum Based on Scale Attention Network. Med. Phys. 2022, 49, 7316–7326. [Google Scholar] [CrossRef] [PubMed]
  21. Zhou, X.; Takayama, R.; Wang, S.; Hara, T.; Fujita, H. Deep Learning of the Sectional Appearances of 3D CT Images for Anatomical Structure Segmentation Based on an FCN Voting Method. Med. Phys. 2017, 44, 5221–5233. [Google Scholar] [CrossRef]
  22. Fu, Y.; Mazur, T.R.; Wu, X.; Liu, S.; Chang, X.; Lu, Y.; Li, H.H.; Kim, H.; Roach, M.C.; Henke, L.; et al. A Novel MRI Segmentation Method Using CNN-Based Correction Network for MRI-Guided Adaptive Radiotherapy. Med. Phys. 2018, 45, 5129–5137. [Google Scholar] [CrossRef] [PubMed]
  23. Yang, Y.; Li, Y.-X.; Yao, R.-Q.; Du, X.-H.; Ren, C. Artificial Intelligence in Small Intestinal Diseases: Application and Prospects. World J. Gastroenterol. 2021, 27, 3734–3747. [Google Scholar] [CrossRef] [PubMed]
  24. Humpire-Mamani, G.E.; Bukala, J.; Scholten, E.T.; Prokop, M.; van Ginneken, B.; Jacobs, C. Fully Automatic Volume Measurement of the Spleen at CT Using Deep Learning. Radiol. Artif. Intell. 2020, 2, e190102. [Google Scholar] [CrossRef] [PubMed]
  25. Küstner, T.; Hepp, T.; Fischer, M.; Schwartz, M.; Fritsche, A.; Häring, H.-U.; Nikolaou, K.; Bamberg, F.; Yang, B.; Schick, F.; et al. Fully Automated and Standardized Segmentation of Adipose Tissue Compartments via Deep Learning in 3D Whole-Body MRI of Epidemiologic Cohort Studies. Radiol. Artif. Intell. 2020, 2, e200010. [Google Scholar] [CrossRef]
  26. Burns, J.E.; Yao, J.; Chalhoub, D.; Chen, J.J.; Summers, R.M. A Machine Learning Algorithm to Estimate Sarcopenia on Abdominal CT. Acad. Radiol. 2020, 27, 311–320. [Google Scholar] [CrossRef]
  27. Rocca, A.; Cipriani, F.; Belli, G.; Berti, S.; Boggi, U.; Bottino, V.; Cillo, U.; Cescon, M.; Cimino, M.; Corcione, F.; et al. The Italian Consensus on minimally invasive simultaneous resections for synchronous liver metastasis and primary colorectal cancer: A Delphi methodology. Updates Surg. 2021, 73, 1247–1265. [Google Scholar] [CrossRef] [PubMed]
  28. Hosny, A.; Parmar, C.; Quackenbush, J.; Schwartz, L.H.; Aerts, H.J.W.L. Artificial Intelligence in Radiology. Nat. Rev. Cancer 2018, 18, 500–510. [Google Scholar] [CrossRef]
  29. Cheng, J.-Z.; Ni, D.; Chou, Y.-H.; Qin, J.; Tiu, C.-M.; Chang, Y.-C.; Huang, C.-S.; Shen, D.; Chen, C.-M. Computer-Aided Diagnosis with Deep Learning Architecture: Applications to Breast Lesions in US Images and Pulmonary Nodules in CT Scans. Sci. Rep. 2016, 6, 24454. [Google Scholar] [CrossRef] [PubMed]
  30. Wang, H.; Zhou, Z.; Li, Y.; Chen, Z.; Lu, P.; Wang, W.; Liu, W.; Yu, L. Comparison of Machine Learning Methods for Classifying Mediastinal Lymph Node Metastasis of Non-Small Cell Lung Cancer from 18F-FDG PET/CT Images. EJNMMI Res. 2017, 7, 11. [Google Scholar] [CrossRef]
  31. Kooi, T.; Litjens, G.; van Ginneken, B.; Gubern-Mérida, A.; Sánchez, C.I.; Mann, R.; den Heeten, A.; Karssemeijer, N. Large Scale Deep Learning for Computer Aided Detection of Mammographic Lesions. Med. Image Anal. 2017, 35, 303–312. [Google Scholar] [CrossRef]
  32. Chen, P.-T.; Wu, T.; Wang, P.; Chang, D.; Liu, K.-L.; Wu, M.-S.; Roth, H.R.; Lee, P.-C.; Liao, W.-C.; Wang, W. Pancreatic Cancer Detection on CT Scans with Deep Learning: A Nationwide Population-Based Study. Radiology 2023, 306, 172–182. [Google Scholar] [CrossRef]
  33. Mata, L.A.; Retamero, J.A.; Gupta, R.T.; García Figueras, R.; Luna, A. Artificial Intelligence-Assisted Prostate Cancer Diagnosis: Radiologic-Pathologic Correlation. Radiographics 2021, 41, 1676–1697. [Google Scholar] [CrossRef]
  34. Gaur, S.; Lay, N.; Harmon, S.A.; Doddakashi, S.; Mehralivand, S.; Argun, B.; Barrett, T.; Bednarova, S.; Girometti, R.; Karaarslan, E.; et al. Can Computer-Aided Diagnosis Assist in the Identification of Prostate Cancer on Prostate MRI? A Multi-Center, Multi-Reader Investigation. Oncotarget 2018, 9, 33804–33817. [Google Scholar] [CrossRef]
  35. Cuocolo, R.; Cipullo, M.B.; Stanzione, A.; Romeo, V.; Green, R.; Cantoni, V.; Ponsiglione, A.; Ugga, L.; Imbriaco, M. Machine Learning for the Identification of Clinically Significant Prostate Cancer on MRI: A Meta-Analysis. Eur. Radiol. 2020, 30, 6877–6887. [Google Scholar] [CrossRef] [PubMed]
  36. Spieler, B.; Sabottke, C.; Moawad, A.W.; Gabr, A.M.; Bashir, M.R.; Do, R.K.G.; Yaghmai, V.; Rozenberg, R.; Gerena, M.; Yacoub, J.; et al. Artificial Intelligence in Assessment of Hepatocellular Carcinoma Treatment Response. Abdom. Radiol. 2021, 46, 3660–3671. [Google Scholar] [CrossRef] [PubMed]
  37. Vivanti, R.; Szeskin, A.; Lev-Cohain, N.; Sosna, J.; Joskowicz, L. Automatic Detection of New Tumors and Tumor Burden Evaluation in Longitudinal Liver CT Scan Studies. Int. J. Comput. Assist. Radiol. Surg. 2017, 12, 1945–1957. [Google Scholar] [CrossRef]
  38. Lim, K.Y.; Kligerman, S.J.; Lin, C.T.; White, C.S. Missed Pulmonary Embolism on Abdominal CT. Am. J. Roentgenol. 2014, 202, 738–743. [Google Scholar] [CrossRef]
  39. Winkel, D.J.; Heye, T.; Weikert, T.J.; Boll, D.T.; Stieltjes, B. Evaluation of an AI-Based Detection Software for Acute Findings in Abdominal Computed Tomography Scans: Toward an Automated Work List Prioritization of Routine CT Examinations. Investig. Radiol. 2019, 54, 55. [Google Scholar] [CrossRef] [PubMed]
  40. Abel, L.; Wasserthal, J.; Weikert, T.; Sauter, A.W.; Nesic, I.; Obradovic, M.; Yang, S.; Manneck, S.; Glessgen, C.; Ospel, J.M.; et al. Automated Detection of Pancreatic Cystic Lesions on CT Using Deep Learning. Diagnostics 2021, 11, 901. [Google Scholar] [CrossRef] [PubMed]
  41. Burns, J.E.; Yao, J.; Muñoz, H.; Summers, R.M. Automated Detection, Localization, and Classification of Traumatic Vertebral Body Fractures in the Thoracic and Lumbar Spine at CT. Radiology 2016, 278, 64–73. [Google Scholar] [CrossRef] [PubMed]
  42. Rueckel, J.; Sperl, J.I.; Kaestle, S.; Hoppe, B.F.; Fink, N.; Rudolph, J.; Schwarze, V.; Geyer, T.; Strobl, F.F.; Ricke, J.; et al. Reduction of Missed Thoracic Findings in Emergency Whole-Body Computed Tomography Using Artificial Intelligence Assistance. Quant. Imaging Med. Surg. 2021, 11, 2486–2498. [Google Scholar] [CrossRef] [PubMed]
  43. Mongan, J.; Kohli, M.D.; Houshyar, R.; Chang, P.D.; Glavis-Bloom, J.; Taylor, A.G. Automated Detection of IVC Filters on Radiographs with Deep Convolutional Neural Networks. Abdom. Radiol. 2023, 48, 758–764. [Google Scholar] [CrossRef]
  44. Ta, C.N.; Kono, Y.; Eghtedari, M.; Oh, Y.T.; Robbin, M.L.; Barr, R.G.; Kummel, A.C.; Mattrey, R.F. Focal Liver Lesions: Computer-Aided Diagnosis by Using Contrast-Enhanced US Cine Recordings. Radiology 2018, 286, 1062–1071. [Google Scholar] [CrossRef]
  45. Li, J.; Wu, Y.; Shen, N.; Zhang, J.; Chen, E.; Sun, J.; Deng, Z.; Zhang, Y. A Fully Automatic Computer-Aided Diagnosis System for Hepatocellular Carcinoma Using Convolutional Neural Networks. Biocybern. Biomed. Eng. 2020, 40, 238–248. [Google Scholar] [CrossRef]
  46. Zhou, J.; Wang, W.; Lei, B.; Ge, W.; Huang, Y.; Zhang, L.; Yan, Y.; Zhou, D.; Ding, Y.; Wu, J.; et al. Automatic Detection and Classification of Focal Liver Lesions Based on Deep Convolutional Neural Networks: A Preliminary Study. Front. Oncol. 2021, 10, 581210. [Google Scholar] [CrossRef]
  47. Ponnoprat, D.; Inkeaw, P.; Chaijaruwanich, J.; Traisathit, P.; Sripan, P.; Inmutto, N.; Na Chiangmai, W.; Pongnikorn, D.; Chitapanarux, I. Classification of Hepatocellular Carcinoma and Intrahepatic Cholangiocarcinoma Based on Multi-Phase CT Scans. Med. Biol. Eng. Comput. 2020, 58, 2497–2515. [Google Scholar] [CrossRef]
  48. Brunese, M.C.; Fantozzi, M.R.; Fusco, R.; De Muzio, F.; Gabelloni, M.; Danti, G.; Borgheresi, A.; Palumbo, P.; Bruno, F.; Gandolfo, N.; et al. Update on the Applications of Radiomics in Diagnosis, Staging, and Recurrence of Intrahepatic Cholangiocarcinoma. Diagnostics 2023, 13, 1488. [Google Scholar] [CrossRef]
  49. Yin, Y.; Yakar, D.; Slangen, J.J.G.; Hoogwater, F.J.H.; Kwee, T.C.; de Haas, R.J. The Value of Deep Learning in Gallbladder Lesion Characterization. Diagnostics 2023, 13, 704. [Google Scholar] [CrossRef]
  50. Nikpanah, M.; Xu, Z.; Jin, D.; Farhadi, F.; Saboury, B.; Ball, M.W.; Gautam, R.; Merino, M.J.; Wood, B.J.; Turkbey, B.; et al. A Deep-Learning Based Artificial Intelligence (AI) Approach for Differentiation of Clear Cell Renal Cell Carcinoma from Oncocytoma on Multi-Phasic MRI. Clin. Imaging 2021, 77, 291–298. [Google Scholar] [CrossRef]
  51. Barat, M.; Cottereau, A.-S.; Gaujoux, S.; Tenenbaum, F.; Sibony, M.; Bertherat, J.; Libé, R.; Gaillard, M.; Jouinot, A.; Assié, G.; et al. Adrenal Mass Characterization in the Era of Quantitative Imaging: State of the Art. Cancers 2022, 14, 569. [Google Scholar] [CrossRef]
  52. Park, H.; Qin, L.; Guerra, P.; Bay, C.P.; Shinagare, A.B. Decoding Incidental Ovarian Lesions: Use of Texture Analysis and Machine Learning for Characterization and Detection of Malignancy. Abdom. Radiol. 2021, 46, 2376–2383. [Google Scholar] [CrossRef]
  53. Giannini, V.; Mazzetti, S.; Defeudis, A.; Stranieri, G.; Calandri, M.; Bollito, E.; Bosco, M.; Porpiglia, F.; Manfredi, M.; De Pascale, A.; et al. A Fully Automatic Artificial Intelligence System Able to Detect and Characterize Prostate Cancer Using Multiparametric MRI: Multicenter and Multi-Scanner Validation. Front. Oncol. 2021, 11, 718155. [Google Scholar] [CrossRef]
  54. Woźnicki, P.; Westhoff, N.; Huber, T.; Riffel, P.; Froelich, M.F.; Gresser, E.; von Hardenberg, J.; Mühlberg, A.; Michel, M.S.; Schoenberg, S.O.; et al. Multiparametric MRI for Prostate Cancer Characterization: Combined Use of Radiomics Model with PI-RADS and Clinical Parameters. Cancers 2020, 12, 1767. [Google Scholar] [CrossRef]
  55. Pickhardt, P.J.; Graffy, P.M.; Zea, R.; Lee, S.J.; Liu, J.; Sandfort, V.; Summers, R.M. Automated CT Biomarkers for Opportunistic Prediction of Future Cardiovascular Events and Mortality in an Asymptomatic Screening Population: A Retrospective Cohort Study. Lancet Digit. Health 2020, 2, e192–e200. [Google Scholar] [CrossRef]
  56. Abraham, T.M.; Pedley, A.; Massaro, J.M.; Hoffmann, U.; Fox, C.S. Association between Visceral and Subcutaneous Adipose Depots and Incident Cardiovascular Disease Risk Factors. Circulation 2015, 132, 1639–1647. [Google Scholar] [CrossRef] [PubMed]
  57. Sebro, R.; De la Garza-Ramos, C. Opportunistic Screening for Osteoporosis and Osteopenia from CT Scans of the Abdomen and Pelvis Using Machine Learning. Eur. Radiol. 2023, 33, 1812–1823. [Google Scholar] [CrossRef] [PubMed]
  58. Pickhardt, P.J. Value-Added Opportunistic CT Screening: State of the Art. Radiology 2022, 303, 241–254. [Google Scholar] [CrossRef] [PubMed]
  59. Lawrence, E.M.; Pooler, B.D.; Pickhardt, P.J. Opportunistic Screening for Hereditary Hemochromatosis with Unenhanced CT: Determination of an Optimal Liver Attenuation Threshold. AJR Am. J. Roentgenol. 2018, 211, 1206–1211. [Google Scholar] [CrossRef] [PubMed]
  60. Graffy, P.M.; Sandfort, V.; Summers, R.M.; Pickhardt, P.J. Automated Liver Fat Quantification at Nonenhanced Abdominal CT for Population-Based Steatosis Assessment. Radiology 2019, 293, 334–342. [Google Scholar] [CrossRef]
  61. Deng, C.-Y.; Lin, Y.-C.; Wu, J.S.; Cheung, Y.-C.; Fan, C.-W.; Yeh, K.-Y.; McMahon, C.J. Progressive Sarcopenia in Patients with Colorectal Cancer Predicts Survival. AJR Am. J. Roentgenol. 2018, 210, 526–532. [Google Scholar] [CrossRef]
  62. Bedrikovetski, S.; Seow, W.; Kroon, H.M.; Traeger, L.; Moore, J.W.; Sammour, T. Artificial Intelligence for Body Composition and Sarcopenia Evaluation on Computed Tomography: A Systematic Review and Meta-Analysis. Eur. J. Radiol. 2022, 149, 110218. [Google Scholar] [CrossRef]
  63. van Timmeren, J.E.; Cester, D.; Tanadini-Lang, S.; Alkadhi, H.; Baessler, B. Radiomics in Medical Imaging-“how-to” Guide and Critical Reflection. Insights Imaging 2020, 11, 91. [Google Scholar] [CrossRef] [PubMed]
  64. Rizzo, S.; Botta, F.; Raimondi, S.; Origgi, D.; Fanciullo, C.; Morganti, A.G.; Bellomi, M. Radiomics: The Facts and the Challenges of Image Analysis. Eur. Radiol. Exp. 2018, 2, 36. [Google Scholar] [CrossRef]
  65. Yao, S.; Ye, Z.; Wei, Y.; Jiang, H.-Y.; Song, B. Radiomics in Hepatocellular Carcinoma: A State-of-the-Art Review. World J. Gastrointest. Oncol. 2021, 13, 1599–1615. [Google Scholar] [CrossRef] [PubMed]
  66. Mao, B.; Zhang, L.; Ning, P.; Ding, F.; Wu, F.; Lu, G.; Geng, Y.; Ma, J. Preoperative Prediction for Pathological Grade of Hepatocellular Carcinoma via Machine Learning-Based Radiomics. Eur. Radiol. 2020, 30, 6924–6932. [Google Scholar] [CrossRef]
  67. Fu, S.; Wei, J.; Zhang, J.; Dong, D.; Song, J.; Li, Y.; Duan, C.; Zhang, S.; Li, X.; Gu, D.; et al. Selection Between Liver Resection Versus Transarterial Chemoembolization in Hepatocellular Carcinoma: A Multicenter Study. Clin. Transl. Gastroenterol. 2019, 10, e00070. [Google Scholar] [CrossRef] [PubMed]
  68. Wang, X.-H.; Long, L.-H.; Cui, Y.; Jia, A.Y.; Zhu, X.-G.; Wang, H.-Z.; Wang, Z.; Zhan, C.-M.; Wang, Z.-H.; Wang, W.-H. MRI-Based Radiomics Model for Preoperative Prediction of 5-Year Survival in Patients with Hepatocellular Carcinoma. Br. J. Cancer 2020, 122, 978–985. [Google Scholar] [CrossRef] [PubMed]
  69. Horvat, N.; Veeraraghavan, H.; Khan, M.; Blazic, I.; Zheng, J.; Capanu, M.; Sala, E.; Garcia-Aguilar, J.; Gollub, M.J.; Petkovska, I. MR Imaging of Rectal Cancer: Radiomics Analysis to Assess Treatment Response after Neoadjuvant Therapy. Radiology 2018, 287, 833–843. [Google Scholar] [CrossRef] [PubMed]
  70. Bhandari, A.; Ibrahim, M.; Sharma, C.; Liong, R.; Gustafson, S.; Prior, M. CT-Based Radiomics for Differentiating Renal Tumours: A Systematic Review. Abdom. Radiol. 2021, 46, 2052–2063. [Google Scholar] [CrossRef] [PubMed]
  71. Mukherjee, S.; Patra, A.; Khasawneh, H.; Korfiatis, P.; Rajamohan, N.; Suman, G.; Majumder, S.; Panda, A.; Johnson, M.P.; Larson, N.B.; et al. Radiomics-Based Machine-Learning Models Can Detect Pancreatic Cancer on Prediagnostic Computed Tomography Scans at a Substantial Lead Time Before Clinical Diagnosis. Gastroenterology 2022, 163, 1435–1446.e3. [Google Scholar] [CrossRef]
  72. Pinker, K.; Shitano, F.; Sala, E.; Do, R.K.; Young, R.J.; Wibmer, A.G.; Hricak, H.; Sutton, E.J.; Morris, E.A. Background, Current Role, and Potential Applications of Radiogenomics. J. Magn. Reson. Imaging 2018, 47, 604–620. [Google Scholar] [CrossRef]
  73. Horvat, N.; Bates, D.D.B.; Petkovska, I. Novel Imaging Techniques of Rectal Cancer: What Do Radiomics and Radiogenomics Have to Offer? A Literature Review. Abdom. Radiol. 2019, 44, 3764–3774. [Google Scholar] [CrossRef]
  74. Mariani, P.; Lae, M.; Degeorges, A.; Cacheux, W.; Lappartient, E.; Margogne, A.; Pierga, J.-Y.; Girre, V.; Mignot, L.; Falcou, M.C.; et al. Concordant Analysis of KRAS Status in Primary Colon Carcinoma and Matched Metastasis. Anticancer Res. 2010, 30, 4229–4235. [Google Scholar]
  75. Lubner, M.G.; Stabo, N.; Lubner, S.J.; del Rio, A.M.; Song, C.; Halberg, R.B.; Pickhardt, P.J. CT Textural Analysis of Hepatic Metastatic Colorectal Cancer: Pre-Treatment Tumor Heterogeneity Correlates with Pathology and Clinical Outcomes. Abdom. Imaging 2015, 40, 2331–2337. [Google Scholar] [CrossRef]
  76. Shin, J.; Seo, N.; Baek, S.-E.; Son, N.-H.; Lim, J.S.; Kim, N.K.; Koom, W.S.; Kim, S. MRI Radiomics Model Predicts Pathologic Complete Response of Rectal Cancer Following Chemoradiotherapy. Radiology 2022, 303, 351–358. [Google Scholar] [CrossRef] [PubMed]
  77. Bian, Y.; Zheng, Z.; Fang, X.; Jiang, H.; Zhu, M.; Yu, J.; Zhao, H.; Zhang, L.; Yao, J.; Lu, L.; et al. Artificial Intelligence to Predict Lymph Node Metastasis at CT in Pancreatic Ductal Adenocarcinoma. Radiology 2022, 306, 220329. [Google Scholar] [CrossRef]
  78. United States Food and Drug Administration. Proposed Regulatory Framework for Modifications to Artificial Intelligence/Machine Learning (AI/ML)-Based Software as a Medical Device (SaMD)—Discussion Paper and Request for Feedback 2019. Available online: https://www.fda.gov/files/medical%20devices/published/US-FDA-Artificial-Intelligence-and-Machine-Learning-Discussion-Paper.pdf (accessed on 25 July 2023).
  79. United States Food and Drug Administration. Artificial Intelligence and Machine Learning in Software as a Medical Device—Action Plan. Available online: https://www.fda.gov/medical-devices/software-medical-device-samd/artificial-intelligence-and-machine-learning-software-medical-device (accessed on 15 February 2023).
  80. United States Food and Drug Administration. Premarket Notification 510(k). Available online: https://www.fda.gov/medical-devices/premarket-submissions-selecting-and-preparing-correct-submission/premarket-notification-510k (accessed on 15 February 2023).
  81. United States Food and Drug Administration. Premarket Approval (PMA). Available online: https://www.fda.gov/medical-devices/premarket-submissions-selecting-and-preparing-correct-submission/premarket-approval-pma (accessed on 15 February 2023).
  82. United States Food and Drug Administration. De Novo Classification Request. Available online: https://www.fda.gov/medical-devices/premarket-submissions-selecting-and-preparing-correct-submission/de-novo-classification-request (accessed on 15 February 2023).
  83. United States Food and Drug Administration. Global Approach to Software as a Medical Device 2022. Available online: https://www.fda.gov/medical-devices/software-medical-device-samd/global-approach-software-medical-device (accessed on 25 July 2023).
  84. International Medical Device Regulators Forum. “Software as a Medical Device”: Possible Framework for Risk Categorization and Corresponding Considerations 2014. Available online: https://www.imdrf.org/sites/default/files/docs/imdrf/final/technical/imdrf-tech-140918-samd-framework-risk-categorization-141013.pdf (accessed on 25 July 2023).
  85. Kim, D.W.; Jang, H.Y.; Kim, K.W.; Shin, Y.; Park, S.H. Design Characteristics of Studies Reporting the Performance of Artificial Intelligence Algorithms for Diagnostic Analysis of Medical Images: Results from Recently Published Papers. Korean J. Radiol. 2019, 20, 405. [Google Scholar] [CrossRef]
  86. Zech, J.R.; Badgeley, M.A.; Liu, M.; Costa, A.B.; Titano, J.J.; Oermann, E.K. Variable Generalization Performance of a Deep Learning Model to Detect Pneumonia in Chest Radiographs: A Cross-Sectional Study. PLoS Med. 2018, 15, e1002683. [Google Scholar] [CrossRef] [PubMed]
  87. Gichoya, J.W.; Banerjee, I.; Bhimireddy, A.R.; Burns, J.L.; Celi, L.A.; Chen, L.-C.; Correa, R.; Dullerud, N.; Ghassemi, M.; Huang, S.-C.; et al. AI Recognition of Patient Race in Medical Imaging: A Modelling Study. Lancet Digit. Health 2022, 4, e406–e414. [Google Scholar] [CrossRef]
  88. Jones-Jang, S.M.; Park, Y.J. How Do People React to AI Failure? Automation Bias, Algorithmic Aversion, and Perceived Controllability. J. Comput.-Mediat. Commun. 2022, 28, zmac029. [Google Scholar] [CrossRef]
  89. Strauß, S. Deep Automation Bias: How to Tackle a Wicked Problem of AI? BDCC 2021, 5, 18. [Google Scholar] [CrossRef]
  90. Gillespie, N.; Lockey, S.; Curtis, C. Trust in Artificial Intelligence: A Five Country Study; The University of Queensland and KPMG: Brisbane, Australia, 2021. [Google Scholar]
  91. Coppola, F.; Faggioni, L.; Gabelloni, M.; De Vietro, F.; Mendola, V.; Cattabriga, A.; Cocozza, M.A.; Vara, G.; Piccinino, A.; Lo Monaco, S.; et al. Human, All Too Human? An All-Around Appraisal of the ‘Artificial Intelligence Revolution’ in Medical Imaging. Front. Psychol. 2021, 12, 710982. [Google Scholar] [CrossRef] [PubMed]
  92. Finlayson, S.G.; Subbaswamy, A.; Singh, K.; Bowers, J.; Kupke, A.; Zittrain, J.; Kohane, I.S.; Saria, S. The Clinician and Dataset Shift in Artificial Intelligence. N. Engl. J. Med. 2021, 385, 283–286. [Google Scholar] [CrossRef] [PubMed]
  93. What Is GDPR? Available online: https://gdpr.eu/what-is-gdpr/ (accessed on 23 August 2023).
  94. Tadavarthi, Y.; Vey, B.; Krupinski, E.; Prater, A.; Gichoya, J.; Safdar, N.; Trivedi, H. The State of Radiology AI: Considerations for Purchase Decisions and Current Market Offerings. Radiol. Artif. Intell. 2020, 2, e200004. [Google Scholar] [CrossRef] [PubMed]
  95. Chen, M.M.; Golding, L.P.; Nicola, G.N. Who Will Pay for AI? Radiol. Artif. Intell. 2021, 3, e210030. [Google Scholar] [CrossRef] [PubMed]
  96. Chu, L.C.; Anandkumar, A.; Shin, H.C.; Fishman, E.K. The Potential Dangers of Artificial Intelligence for Radiology and Radiologists. J. Am. Coll. Radiol. 2020, 17, 1309. [Google Scholar] [CrossRef] [PubMed]
  97. Uyumazturk, B.; Kiani, A.; Rajpurkar, P.; Wang, A.; Ball, R.L.; Gao, R.; Yu, Y.; Jones, E.; Langlotz, C.P.; Martin, B.; et al. Deep Learning for the Digital Pathologic Diagnosis of Cholangiocarcinoma and Hepatocellular Carcinoma: Evaluating the Impact of a Web-Based Diagnostic Assistant. arXiv 2019, arXiv:1911.07372. [Google Scholar]
  98. Jungmann, F.; Jorg, T.; Hahn, F.; Pinto dos Santos, D.; Jungmann, S.M.; Düber, C.; Mildenberger, P.; Kloeckner, R. Attitudes Toward Artificial Intelligence Among Radiologists, IT Specialists, and Industry. Acad. Radiol. 2021, 28, 834–840. [Google Scholar] [CrossRef] [PubMed]
  99. Geis, J.R.; Brady, A.P.; Wu, C.C.; Spencer, J.; Ranschaert, E.; Jaremko, J.L.; Langer, S.G.; Borondy Kitts, A.; Birch, J.; Shields, W.F.; et al. Ethics of Artificial Intelligence in Radiology: Summary of the Joint European and North American Multisociety Statement. Radiology 2019, 293, 436–440. [Google Scholar] [CrossRef]
  100. Mittelstadt, B.D.; Floridi, L. The Ethics of Big Data: Current and Foreseeable Issues in Biomedical Contexts. In The Ethics of Biomedical Big Data; Mittelstadt, B.D., Floridi, L., Eds.; Law, Governance and Technology Series; Springer International Publishing: Cham, Switzerland, 2016; pp. 445–480. ISBN 978-3-319-33525-4. [Google Scholar]
  101. Mirsky, Y.; Mahler, T.; Shelef, I.; Elovici, Y. CT-GAN: Malicious Tampering of 3D Medical Imagery Using Deep Learning. In Proceedings of the 28th USENIX Security Symposium (USENIX Security 19), Santa Clara, CA, USA, 14–16 August 2019. [Google Scholar]
  102. Finlayson, S.G.; Chung, H.W.; Kohane, I.S.; Beam, A.L. Adversarial Attacks against Medical Deep Learning Systems. arXiv 2018, arXiv:1804.05296. [Google Scholar]

Share and Cite

MDPI and ACS Style

Mervak, B.M.; Fried, J.G.; Wasnik, A.P. A Review of the Clinical Applications of Artificial Intelligence in Abdominal Imaging. Diagnostics 2023, 13, 2889. https://doi.org/10.3390/diagnostics13182889

