Review

Artificial Intelligence-Empowered Radiology—Current Status and Critical Review

1 Department of Diagnostic Imaging, Jagiellonian University Medical College, 30-663 Krakow, Poland
2 Faculty of Geology, Geophysics and Environmental Protection, AGH University of Krakow, 30-059 Krakow, Poland
3 Department of Measurement and Electronics, AGH University of Krakow, 30-059 Krakow, Poland
4 Department of Biocybernetics and Biomedical Engineering, AGH University of Krakow, 30-059 Krakow, Poland
5 Institute of Electronics, Lodz University of Technology, 93-590 Lodz, Poland
6 Department of Algorithmics and Software, Silesian University of Technology, 44-100 Gliwice, Poland
* Author to whom correspondence should be addressed.
Diagnostics 2025, 15(3), 282; https://doi.org/10.3390/diagnostics15030282
Submission received: 12 December 2024 / Revised: 12 January 2025 / Accepted: 23 January 2025 / Published: 24 January 2025
(This article belongs to the Topic AI in Medical Imaging and Image Processing)

Abstract:
Humanity stands at a pivotal moment of technological revolution, with artificial intelligence (AI) reshaping fields traditionally reliant on human cognitive abilities. This transition, driven by advancements in artificial neural networks, has transformed data processing and evaluation, creating opportunities for addressing complex and time-consuming tasks with AI solutions. Convolutional neural networks (CNNs) and the adoption of GPU technology have already revolutionized image recognition by enhancing computational efficiency and accuracy. In radiology, AI applications are particularly valuable for tasks involving pattern detection and classification; for example, AI tools have enhanced diagnostic accuracy and efficiency in detecting abnormalities across imaging modalities through automated feature extraction. Our analysis reveals that neuroimaging and chest imaging, as well as CT and MRI modalities, are the primary focus areas for AI products, reflecting their high clinical demand and complexity. AI tools are also used to target high-prevalence diseases, such as lung cancer, stroke, and breast cancer, underscoring AI’s alignment with impactful diagnostic needs. The regulatory landscape is a critical factor in AI product development, with the majority of products certified under the Medical Device Directive (MDD) and Medical Device Regulation (MDR) in Class IIa or Class I categories, indicating compliance with moderate-risk standards. We identified a rapid increase in AI product development from 2017 to a peak in 2020, followed by recent stabilization and saturation. In this work, the authors review the advancements in AI-based imaging applications, underscoring AI’s transformative potential for enhanced diagnostic support and focusing on the critical role of CNNs, regulatory challenges, and potential threats to human labor in the field of diagnostic imaging.

1. Introduction

Humanity currently stands at the threshold of a technological revolution, in which the application of decision-making systems based on microprocessors represents a transformation on par with the invention of the wheel or the harnessing of fire [1,2,3]. The ability to transfer decision-making processes concerning the interpretation and outcomes of human-related data—traditionally reserved for the human brain—to artificial neural network systems marks a profound shift [4,5,6], representing a significant breakthrough in data processing and evaluation methods [7].
Decision-making in semiconductor-based neural networks is, to some extent, determined by humans as the problem-solving approach relies on patterns within the training data [8,9]. It is worth noting that deeper, multilayered networks have the capacity to develop autonomous decision pathways, gradually constructed based on their own iterative experiences through exposure to input data [10,11]; this process can be opaque, often resulting in solutions that surpass human intuition, as illustrated in the notable example of AI algorithms outperforming humans in the game of Go [12]. Given these capabilities, AI not only offers solutions to repetitive and time-intensive tasks but also presents novel approaches to long-standing problems, a potential that explains AI’s extensive applications in medicine [13].
AI applications are apparent in fields such as materials science, biology, biochemistry, and genetics, where multidimensional data analysis leads to breakthroughs in creating unique structures and compounds valuable for biomaterials and pharmaceuticals [14]. AI’s utility is particularly evident in diagnostic imaging, where physicians engage in the detection and classification of patterns; these tasks are time-consuming for specialized medical professionals and, due to the repetitive nature of pattern recognition, are prime candidates for AI-powered decision systems [15,16]. Consequently, developing such systems has become a priority within the scientific community, as reflected in the increasing number of algorithms aimed at detecting various anomalies in medical images [17,18]; these algorithms primarily rely on convolutional neural network architectures trained on specific image patterns [19].
The vast diversity in data encountered in modern imaging diagnostics presents substantial challenges. Algorithms must be tailored to specific tissues, organs, and imaging modalities, such as computed tomography, magnetic resonance imaging, mammography, and X-ray; this diversity is reflected in the wide array of algorithms designed to analyze various regions of the human body [20,21]. Depending on an algorithm’s validated effectiveness and integration capabilities with radiological systems, it may or may not receive endorsements from certification bodies [22,23].
In this review, the authors present an overview of existing algorithms and outline the most critical techniques used in the classification of medical imaging data. The article is subdivided into the following sections: a historical perspective, an overview of deep learning solutions, considerations of human–machine interaction, an assessment of job risk, and a review of current medical solutions.
We aim to provide a comprehensive analysis of the advancements and challenges associated with AI applications in radiology, specifically examining the evolution of convolutional neural networks (CNNs) as foundational tools in diagnostic imaging, evaluating the regulatory frameworks influencing AI integration into clinical workflows, and exploring the potential impacts of these technologies on radiologists’ roles and healthcare delivery. By synthesizing the current state of AI in radiology in this work, we seek to bridge existing knowledge gaps, propose directions for future research, and offer actionable insights for stakeholders in the medical imaging field.
This manuscript is organized as follows: In Section 2, we provide a historical background, highlighting key milestones in AI development and their impacts on medical imaging. In Section 3, we delve into contemporary deep learning models, discussing their architecture, applications, and limitations. In Section 4, we explore the comparative efficiency of AI and human radiologists, shedding light on their complementary roles in diagnostic workflows. In Section 5, we address concerns regarding the potential displacement of radiologists and emphasize the importance of balanced human–AI collaboration. In Section 6, we examine the critical role of data preparation in enhancing AI model reliability; then, in Section 7, we introduce the significance of textural analysis in medical imaging. In Section 8 and Section 9, we discuss emerging trends, challenges, and the current market landscape for AI in radiology. Finally, we conclude the manuscript with a summary of our findings and recommendations for future research.

2. Historical Background

The success of AI in image recognition is primarily driven by advancements in convolutional neural networks (CNNs), which can be traced back to the application of GPU technology and regularization techniques such as dropout, first combined at scale in 2012, when they helped secure a win in the ImageNet contest. The ImageNet contest was conceived as a benchmark for evaluating the most effective approaches to image recognition, initially comprising over 10 million images across more than 22,000 categories. The 2012 breakthrough by Geoffrey Hinton’s students, Ilya Sutskever and Alex Krizhevsky, marked a pivotal moment in computer vision and image recognition: these researchers, who had initially worked in speech recognition, used two NVIDIA GTX 580 GPUs (NVIDIA Corporation, Santa Clara, CA, USA), each with 3 GB of memory, to train a model with 60 million parameters over roughly 90 epochs in a matter of days, achieving significantly lower error rates than their competitors [24,25].
The success of this approach catalyzed the broad adoption of GPUs in deep learning, a shift that represented a turning point in computer vision by demonstrating the efficiency of GPUs in computer-driven image recognition. The next significant advancement, inspired by AlexNet, was ResNet, which won the 2015 ImageNet competition [26,27]; ResNet’s incorporation of residual connections (i.e., learning output–input differences) enhanced network performance, solidifying CNNs as the fundamental architecture in AI-driven image analysis [28]. The developments that followed highlighted the capabilities of CNNs and the power of GPU technology [29,30,31]. Building on these achievements, the first successful radiology-directed applications emerged, such as deep learning for lung nodule detection in the 2016 LUNA (Lung Nodule Analysis) challenge [32], which served as an early benchmark for the performance of deep learning algorithms in identifying lung nodules, thereby demonstrating the potential of AI in radiology. Deep learning models, inspired by AlexNet and encouraged by such competitions, paved the way for subsequent programs that rely on object detection within medical imaging, enabling the recognition of features in radiographs, CT scans, and MRI images; these breakthroughs enabled automated feature extraction and, if broadly used, may substantially improve the accuracy of image analysis in diagnostic imaging. This progress has led to the development of AI tools that can support radiologists by providing second opinions, identifying anomalies, and even predicting disease [33].

3. Deep Learning Models: A Short Introduction to Current Solutions

In radiology, several aspects of data analysis should be considered; some current deep learning solutions are depicted in Figure 1. First, visual inspection of the vast amount of imagery generated by modern equipment provides insight into human body structure. Here, deep learning models are applied to classify patients as healthy or unhealthy, sometimes further grading the severity of an illness. On the other hand, such general information may be insufficient when the region of change must be determined, or when the data must be segmented for further analysis; in these cases, convolutional neural networks are typically used. In several problems, however, the task is so complex that training a single model is insufficient, and a pipeline of various models is prepared to achieve the goal.

3.1. Classification

When visually analyzing radiographs (X-ray), computed tomography (CT), magnetic resonance imaging (MRI), or ultrasonography scans, convolutional neural networks (CNNs) are the standard choice. ResNet [34] is one CNN that has gained considerable popularity; for instance, the smallest version, ResNet-18, applied to histopathological images, CT/MRI scans, and genomic data, supports prognosis in clear-cell renal cell carcinoma [35], while the largest version, ResNet-152, was applied to detect pneumonia in chest X-rays [36] and constituted part of a model dedicated to diagnosing and predicting outcomes of COVID-19 pneumonia [37]. A modified ResNet architecture was exploited for attenuation correction in pelvic PET/MR images, significantly reducing voxel-based errors and improving the quantification of bone lesions compared to other existing methods [38]; meanwhile, the 3D version of this architecture helps predict the imaging characteristics, malignancy, and pathological subtypes of pulmonary nodules detected in CT scans [39]. Other architectures also find application; for example, Albiol et al. [40] compared the outcomes of ResNet-50 [34], DenseNet-121 [41], Inception_v3 [42], and Inception-Resnet_v2 [43] against radiologists’ interpretations of chest radiographs for the early detection of COVID-19, showing that the deep learning approach outperformed the experts, reaching an AUC of 0.85 compared with 0.71 in the reference group. Additionally, Fink et al. [44] showed that using the Xception architecture to classify musculoskeletal radiographs by projection and body side resulted in an accuracy of 0.97.
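To make the classification workflow above concrete, the following is a minimal sketch, assuming PyTorch and torchvision are available, of adapting an ImageNet-pretrained ResNet-18 to a two-class (e.g., normal vs. pathological) imaging task; the class count, learning rate, and data are illustrative placeholders rather than a reconstruction of any of the published models discussed above.

```python
# Minimal sketch: fine-tuning a pretrained ResNet-18 for binary image
# classification. All task-specific choices here are hypothetical.
import torch
import torch.nn as nn
from torchvision import models

model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
model.fc = nn.Linear(model.fc.in_features, 2)  # e.g., normal vs. pathological

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

def train_step(images: torch.Tensor, labels: torch.Tensor) -> float:
    """One optimization step on a batch of (N, 3, 224, 224) images."""
    model.train()
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```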
Not all tasks demand large, pre-trained models; some achieve better results with small but dedicated architectures. Arbabshirani et al. [45] used a tailored network architecture to detect intracranial hemorrhage in head CT scans. Nguyen [46] deployed a model for accurately measuring 12 spinal alignment parameters from X-ray images, addressing the manual challenges of spinal misalignment assessment. Solak et al. [47] designed capsule networks, in which capsules group neurons that collectively represent a visual entity, transforming visual information into more representative features (e.g., height, width); they employed this approach to classify adrenal lesions in MR images with 0.98 accuracy.

3.2. Segmentation

Beyond the classification of a whole image, it is also possible to determine the detailed region in which an organ or tissue of interest is depicted; in computer science, this problem is called semantic segmentation, in which a mask is generated for the whole image and each pixel is assigned to a data class, consequently delineating specific regions. The encoder–decoder architecture of the U-Net structure is the most widely used for this task; for example, Gasulla et al. [48] used this approach to create a system to assess lung condition severity during the COVID-19 pandemic. Many other approaches concentrate on the segmentation of abdominal organs or the determination of body composition, in the context of muscle and fat distribution, as mentioned by Santhanam et al. [49] in their review. However, AI segmentation results still show imperfections when compared to human ones, as reported by Willemink et al. [50], who claimed that applying transformer networks [51] should bring improvements.
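As an illustration of the encoder–decoder idea behind U-Net, the following is a minimal, single-level sketch in PyTorch; real segmentation networks such as the one in [48] use several resolution levels and trained weights, so this block only demonstrates the skip-connection mechanics under those simplifying assumptions.

```python
# Minimal one-level U-Net-style encoder-decoder with a skip connection.
import torch
import torch.nn as nn

def block(cin: int, cout: int) -> nn.Sequential:
    return nn.Sequential(
        nn.Conv2d(cin, cout, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(cout, cout, 3, padding=1), nn.ReLU(inplace=True))

class TinyUNet(nn.Module):
    def __init__(self, n_classes: int = 2):
        super().__init__()
        self.enc = block(1, 32)                   # encoder features
        self.down = nn.MaxPool2d(2)               # halve resolution
        self.mid = block(32, 64)                  # bottleneck
        self.up = nn.ConvTranspose2d(64, 32, 2, stride=2)
        self.dec = block(64, 32)                  # 64 = 32 (skip) + 32 (up)
        self.head = nn.Conv2d(32, n_classes, 1)   # per-pixel class scores

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        e = self.enc(x)                            # kept for the skip path
        m = self.mid(self.down(e))
        d = self.dec(torch.cat([self.up(m), e], dim=1))
        return self.head(d)                        # (N, n_classes, H, W)

logits = TinyUNet()(torch.randn(1, 1, 128, 128))   # -> (1, 2, 128, 128)
```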
Although deep models have proven to be of high quality, complex tasks often require more than one model, as a single model usually cannot satisfy several constraints simultaneously; in such cases, the entire process combines various models to achieve the goal. Larson et al. [52] designed a system composed of FastRCNN, based on ResNet-101, for landmark detection, followed by an EfficientNet-D0 model trained to recognize the presence or absence of hardware within extracted joint image patches; this system showed 99% correlation with radiologist measurements of leg length. Nurzynska et al. [53] created a pipeline composed of two Inception_v3 networks to determine whether the data were positive or negative for acid-fast (AF) mycobacteria; the first network removed the background, while the second created a heatmap corresponding to the probability of AF presence. A sequence of three models was applied by Haji Maghsoudi [54] in breast cancer screening: the first model removed the background, the second found and removed the pectoralis muscle (both based on the U-Net [55] architecture), while the last segmented dense tissue and classified it with a ResNet encoder.
As these examples show, a combination of models is necessary when the main objective requires several stages of analysis; the first stages are usually responsible for removing unnecessary information, allowing the subsequent models to concentrate more effectively on the main task.
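A hedged sketch of such a two-stage pipeline is given below: a segmentation model first removes the background, and a classifier then operates on the cropped region. Both models are placeholders to be supplied by the user; this is not a reconstruction of any of the cited systems.

```python
# Two-stage pipeline sketch: segment, crop the region of interest, classify.
import torch

def run_pipeline(image: torch.Tensor, seg_model, cls_model, threshold=0.5):
    """image: (1, 1, H, W) tensor; returns class logits for the cropped ROI."""
    with torch.no_grad():
        # Stage 1: foreground mask from the segmentation model's logits.
        mask = torch.sigmoid(seg_model(image))[0, 0] > threshold  # (H, W)
        ys, xs = torch.where(mask)
        if len(ys) == 0:
            return None  # nothing segmented -> nothing to classify
        # Crop the bounding box of the segmented region (background removed).
        roi = image[:, :, ys.min():ys.max() + 1, xs.min():xs.max() + 1]
        # Stage 2: resize the crop and classify it.
        roi = torch.nn.functional.interpolate(roi, size=(224, 224))
        return cls_model(roi)

# Example wiring (untrained placeholder models supplied by the user):
# logits = run_pipeline(torch.rand(1, 1, 256, 256), my_unet, my_classifier)
```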
Although model architecture contributes to a solution’s success, it is neither the only factor nor always the decisive one. The training algorithm chosen, especially when labeled data are absent or insufficient [53], as well as the optimization algorithm used while training the network, may influence the final outcome [56].

3.3. Report Generation

Large language models (LLMs) have recently found a place in automating the analysis and generation of radiological image descriptions. AI technology is mature enough, and the amount of gathered data sufficient, to prepare models that automatically generate radiological image descriptions; this does not put radiologists out of work, however, as AI systems cannot provide a final diagnosis. AI should instead facilitate radiologists’ work with automatically generated image descriptions that they verify and refine; such an approach may improve early diagnosis and limit the radiologist’s workload [57]. For example, Zhang et al. [58] developed a generative model to automate the generation of radiological reports from chest X-rays; blind assessments by radiologists indicated that the generated reports were comparable in quality to those produced by human experts. Another approach was presented by Bassi et al. [59], who introduced RadGPT, an anatomy-aware vision–language AI agent designed to generate detailed reports from CT scans. RadGPT segments tumors, including benign cysts and malignant growths, along with surrounding anatomical structures, transforming this information into both structured and narrative reports. These reports provide comprehensive details on tumor size, shape, location, attenuation, volume, and interactions with adjacent blood vessels and organs.
Conversely, Sun et al. [60] evaluated GPT-4’s ability to generate the “Impressions” section of radiology reports from the “Findings” section, comparing its outputs to those of human radiologists. Radiologists rated human-generated impressions higher in coherence, comprehensiveness, and factual consistency and concluded that, despite GPT-4’s potential to assist in report generation, its outputs currently do not match the quality of human radiologists’ work.
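For readers who want to experiment with findings-to-impression drafting, the following is a minimal sketch using a generic Hugging Face summarization model; the model choice (t5-small) and the sample report are illustrative stand-ins, not the systems evaluated above (RadGPT, GPT-4), and any output would require expert review.

```python
# Minimal sketch: drafting an "Impression" from a "Findings" section with a
# generic summarization model. Model and text are illustrative placeholders.
from transformers import pipeline

summarizer = pipeline("summarization", model="t5-small")

findings = ("Patchy consolidation in the right lower lobe. "
            "No pleural effusion. Heart size within normal limits.")
draft = summarizer(findings, max_length=40, min_length=5)
print(draft[0]["summary_text"])  # draft impression -- requires expert review
```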

3.4. Language Analysis

The functionality of LLMs in radiology is not restricted to generating reports but includes broader applications across the field. For instance, LLMs have shown potential in accurately identifying and classifying true and false laterality errors in radiology reports [61]. Enhancing patient understanding of their condition is crucial for improved outcomes, making the creation of more patient-friendly imaging reports an essential area for LLM implementation. As Butler et al. [62] demonstrated in the context of orthopedic radiology, AI-LLM may improve the readability of radiological reports across multiple imaging modalities. Moreover, LLMs can serve as effective tools for post hoc structured reporting in radiology, enabling significant time savings by automating the organization and structuring of radiological data for improved efficiency and accessibility [63]. LLMs have also been employed to classify unstructured radiology reports into standardized categories. For example, a feasibility study by Matute-González et al. developed LiverAI, a specialized large language model designed to automatically annotate free-text MRI reports with LI-RADS v2018 categories. The findings revealed that incorporating LiverAI into clinical processes could streamline workflows by reducing the radiologists’ workload by 45% while maintaining high diagnostic accuracy [64].
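As a sketch of the report-categorization idea, the snippet below applies an off-the-shelf zero-shot classifier to a free-text report fragment; the label set is a simplified, hypothetical stand-in for schemes such as LI-RADS, not the LiverAI model itself.

```python
# Minimal sketch: zero-shot categorization of a free-text report fragment.
# The candidate labels are hypothetical, simplified category names.
from transformers import pipeline

classifier = pipeline("zero-shot-classification",
                      model="facebook/bart-large-mnli")

report = "1.8 cm arterial-phase hyperenhancing observation with washout."
labels = ["definitely benign", "probably benign",
          "intermediate probability", "probably malignant"]

result = classifier(report, candidate_labels=labels)
print(result["labels"][0], round(result["scores"][0], 3))  # top category
```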
These advancements underscore the versatility of LLMs in radiology, highlighting their capacity to improve diagnostic accuracy, patient communication, and workflow efficiency across various applications.

4. Are Machines More Efficient than Human Doctors?

Radiology involves the interpretation and classification of medical images based on characteristic features; a radiologist is trained to distinguish between features considered pathological and normal, identifying abnormalities and grouping them under specific diagnostic categories [65,66]. The process of extracting common features from sets of known images combines knowledge and expertise with the innate pattern recognition skills of the imaging specialist. This ability to recognize shared features across varied datasets can be described mathematically and is often viewed as a measure of cognitive intelligence, since the capability to achieve specific goals based on such generalizations is a classic marker of system intelligence [67,68]. Machine learning has shown promise in replacing aspects of this process, as machines can emulate intelligence by generalizing and identifying similarities across varied models; the ability to find similarities across patterns is understood as a foundation of artificial intelligence [69]. Furthermore, human knowledge dissemination is inherently slow, limited by communication bottlenecks: dictation or writing proceeds at only a few bits per second, even with templates, which remains considerably slower than machine processing [70]. A significant challenge in radiology is the diversity of the data, which vary not only among imaging modalities (e.g., CT, MRI, CR) but also within the same anatomical structures, which can convey differing information, thus requiring a deep understanding of imaging processes and extensive training [71]. Additionally, the vast range of pathologies is challenging to encode in numerical formats suitable for machine processing; however, recent advances show that machines with self-learning capabilities can efficiently extract and replicate patterns, demonstrating an impressive ability to recognize features and share learned patterns swiftly [72,73]. This progress has motivated various research groups to create specialized programs designed to extract specific pathological features. However, there is no comprehensive program that can accurately detect all pathological features in any given image (i.e., a generalized AI for pathology detection); currently, radiology relies on a suite of AI tools, each tailored to a particular problem. Familiarity with these tools is essential for radiologists to work effectively, especially as the gap between available radiology professionals and the volume of medical images continues to grow. The widespread use of AI tools in radiology could enhance diagnostic accuracy, reduce error rates, and improve patient outcomes by standardizing processes and expediting diagnoses.
Starting with connections, the human brain of a radiologist contains approximately 80 trillion synaptic links, a complex neural network that enables deep adaptability and perceptual sensitivity. In contrast, AI models, with around 3 trillion parameters, have fewer connections, which points to their efficiency with structured data but reveals potential limitations in adaptability (Figure 2). Moving to data processing volume, radiologists handle a moderate amount of data in each session, reflecting their focus on quality over quantity and their ability to adapt to diverse cases [74].
However, AI models surpass radiologists in their ability to process large volumes of data quickly, which is advantageous for tasks requiring speed and consistency, but AI models may lack the nuanced adaptability seen in human analysis. When it comes to consistency, radiologists show moderate reliability, as their work is often subject to slight variability due to factors such as fatigue and cognitive biases; AI models, on the other hand, demonstrate high consistency, performing uniformly across datasets and maintaining precision without the effects of fatigue or bias [75].
The high consistency of AI models makes them suitable for tasks that require repetitive accuracy though they may not capture subtleties as effectively as humans. In terms of speed of analysis, radiologists typically exhibit a moderate pace, balancing thoroughness with the need for careful evaluation; their work involves a nuanced approach that can be time-consuming but essential for accuracy in complex cases. AI models, in contrast, excel in speed, rapidly processing images in high volumes, which proves useful for scenarios in which rapid diagnostics are critical though it often requires human validation for complex findings [76].
The above comparison highlights the complementary nature of radiologists and AI models: radiologists bring adaptability, perceptual depth, and expertise, while AI models contribute efficiency, speed, and consistent accuracy. Together, these strengths suggest that an integrated approach, combining human insight with AI efficiency, might offer the most comprehensive and effective path forward in medical imaging diagnostics.

5. Is the Job of a Radiologist at Risk?

Machines are often considered superior to humans in solving logical problems; however, in terms of perception—especially in complex medical image recognition—humans still outperform machines [77], and this remains true when we consider a radiologist’s ability to adapt to diverse datasets and detect various pathologies [78,79,80]. The following two primary factors determine the performance of both biological and artificial networks: first, the number and architecture of connections, which dictate the network’s ability to capture intricate patterns; and second, the volume and quality of data as better data typically yield better outcomes, a principle often summarized in the phrase “garbage in, garbage out” [81]. Additionally, there are other factors, unique to artificial networks, which are not directly comparable to human brain processes but significantly influence artificial network performance, including training techniques, optimization algorithms, hyperparameter tuning, model generalization, and transfer learning [82,83]. Biological systems, such as the human brain, contain an enormous number of connections—up to 80 trillion synaptic links—whereas large language models typically rely on around 3 trillion parameters [84,85,86] (Table 1), revealing a fundamental difference between the two: while humans operate with many connections and a relatively low data volume, machines possess vast data volumes but operate with relatively fewer connections. Machine learning models’ information extraction abilities are proportional to their capacity to identify similarities across datasets, including visual patterns, which can be measured with data compression tests [69,87]; on the other hand, foundation models trained on extensive datasets demonstrate high proficiency in detecting pathologies and, when well-trained, can outperform human observers in terms of accuracy, sensitivity, consistency, and speed [88], without suffering from fatigue or biases that commonly limit humans [89,90,91]. Currently, radiologists, equipped with broad and adaptable knowledge, remain the most proficient operators of relevant tools, which addresses the pressing question regarding the future of radiology [77,92,93]; the unique insights and adaptability of radiologists suggest that their expertise will remain vital in integrating AI tools into medical practice [94].

6. Importance of Data Preparation for Processing with AI

Preparing medical imaging data for machine learning (ML) requires a systematic approach to ensure model reliability, accuracy, and generalizability (Figure 3). First, clear project goals guide data preparation, as classification, segmentation, and detection tasks each have unique requirements [95]. Standardization is essential, including converting images to a consistent format (e.g., DICOM or NIfTI) and normalizing resolution and intensity to account for differences across modalities such as X-ray, MRI, or CT [96]. Data cleaning addresses artifacts (e.g., motion blur) and ensures label accuracy, while quality assurance (QA) checks verify noise levels, contrast, and anatomical coverage. Accurate annotation by experienced radiologists is critical, with guidelines established to maintain consistency, especially for complex tasks such as segmentation. Data augmentation techniques (e.g., rotations, flips, contrast adjustments) expand limited datasets and increase robustness; however, it is crucial that the chosen techniques align with the clinical context to avoid introducing unrealistic transformations. For example, while minor rotations (a few degrees left or right) can enhance model robustness, extreme rotations such as 90 degrees are inappropriate for X-ray imaging, as they do not reflect how such images are typically analyzed by clinicians. After preparation, datasets should be split into training, validation, and test sets while preventing leakage by keeping each patient’s images in only one subset, as sketched below. Balanced datasets are ideal, but, for smaller datasets, k-fold cross-validation can help maximize data use. Metadata management and privacy safeguards are essential, and patient information should be de-identified to comply with regulations (e.g., HIPAA or GDPR). Finally, pre-training checks, including basic statistical analysis and baseline model evaluation, help identify potential issues before training [97].
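As referenced above, the following is a minimal sketch of patient-level splitting with scikit-learn; the file names and patient IDs are hypothetical placeholders.

```python
# Minimal sketch: splitting at the patient level so that all images from one
# patient land in exactly one subset, preventing leakage between subsets.
from sklearn.model_selection import GroupShuffleSplit

images = ["p001_a.dcm", "p001_b.dcm", "p002_a.dcm", "p003_a.dcm"]  # hypothetical
patients = ["p001", "p001", "p002", "p003"]  # grouping key per image

splitter = GroupShuffleSplit(n_splits=1, test_size=0.25, random_state=0)
train_idx, test_idx = next(splitter.split(images, groups=patients))

print([images[i] for i in train_idx])  # training images
print([images[i] for i in test_idx])   # held-out patient(s) only
```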

7. The Role of Textural Analysis in Image Preprocessing

Image texture is a crucial component of various image types, including medical images, as medical imaging modalities visualize the properties of internal organs and tissues through textural representation; for instance, the textures observed in tomographic cross-sections provide valuable diagnostic insights. Textural parameters are part of radiomics [98] and reflect the physiological characteristics of tissues, enabling applications such as organ segmentation, lesion detection, and the assessment of pathological changes. The importance of textural analysis in diagnostic imaging has been established across various modalities, including computed tomography (CT) [99], magnetic resonance imaging (MRI) [100], and ultrasound [101].
Furthermore, variations in acquisition parameters across different patient images often affect brightness and contrast in regions of interest (ROIs); these variations can occur between consecutive images, causing some textural features to depend not only on texture but also on factors such as average brightness and contrast. Consequently, features intended to describe tissue structure may inadvertently reflect scanner sensitivity inconsistencies within the analyzed region [102], potentially leading to inaccurate tissue characterization or misclassification.
To mitigate these issues, the normalization of ROIs is commonly applied as an initial step, expanding the image histogram within the ROI to cover the entire available intensity range; this process enhances the contrast between bright and dark structural elements within the texture and reduces the influence of local mean intensity, thereby improving the quality of the extracted features. Different methods for ROI normalization, as well as their influence on the overall image analysis results, are presented in reference [103].
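A minimal sketch of one such normalization, simple min–max stretching of the ROI histogram, is shown below; reference [103] discusses further variants, so this snippet is illustrative rather than a reimplementation of those methods.

```python
# Minimal sketch: min-max normalization of intensities inside an ROI, so the
# ROI histogram spans the full available range (here mapped to [0, 255]).
import numpy as np

def normalize_roi(image: np.ndarray, roi_mask: np.ndarray) -> np.ndarray:
    """image: 2D array; roi_mask: boolean array of the same shape."""
    out = image.astype(np.float64).copy()
    lo, hi = out[roi_mask].min(), out[roi_mask].max()
    # Stretch ROI intensities; guard against a constant-intensity ROI.
    out[roi_mask] = (out[roi_mask] - lo) / max(hi - lo, 1e-12) * 255.0
    return out
```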
Texture feature maps also play an important role in the analysis of biomedical images. A feature map is an image of the distribution of values of a given textural parameter, determined for the entire image or a fragment of it; the value of the feature is computed for each image point over a neighborhood large enough to characterize the texture correctly. A feature map allows us to observe how well a given feature distinguishes the analyzed textures; with a properly selected parameter, the map transforms the textured areas into relatively uniform fragments differing in brightness (thereby encoding the values of the selected parameter).
The map—or, more often, maps—of features selected during the selection process are used as input data for the segmentation stage. An example feature map for an image of a brain cross-section, containing a fragment of a subarachnoid hemorrhage, is shown in Figure 4 where the map of the AngularSecondMoment feature (calculated based on the Gray Level Cooccurrence Matrix, GLCM) [104] contains the stroke area, which has a significantly different value than the rest of the brain; therefore, the image of this map allows us to isolate the stroke area using simple brightness thresholding.
Another example of using such maps for image segmentation is shown in Figure 5, with a map determined for the SumAverage parameter from the GLCM over a 15 × 15-pixel sliding window across a T1W MR image of a foot cross-section; bone regions appear on this map as smooth areas with a small range of brightness levels, which allows them to be relatively easily distinguished from other regions. Texture feature maps can also be used for edge detection. Figure 5C,D shows such maps, obtained for the SumOfSquares (GLCM) and Sigma (autoregressive model) features, respectively, in which the detected edges delineate bone tissue regions. An example of a texture map application combined with an active contour model for biomedical image segmentation is discussed in reference [105].
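To illustrate how such maps are computed, the following is a hedged sketch of an AngularSecondMoment (ASM) feature map using scikit-image’s GLCM utilities over a sliding window; the window size and gray-level quantization are arbitrary choices for illustration, not the settings used for Figures 4 and 5.

```python
# Sketch: sliding-window GLCM AngularSecondMoment feature map (scikit-image
# >= 0.19 uses the spelling graycomatrix/graycoprops).
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def asm_feature_map(image: np.ndarray, win: int = 15, levels: int = 32):
    """image: 2D uint8 array; returns a float map, zero at the borders."""
    img = (image // (256 // levels)).astype(np.uint8)  # quantize gray levels
    half = win // 2
    fmap = np.zeros(img.shape, dtype=np.float64)
    for y in range(half, img.shape[0] - half):
        for x in range(half, img.shape[1] - half):
            patch = img[y - half:y + half + 1, x - half:x + half + 1]
            glcm = graycomatrix(patch, distances=[1], angles=[0],
                                levels=levels, symmetric=True, normed=True)
            fmap[y, x] = graycoprops(glcm, "ASM")[0, 0]
    # Uniform textures yield high ASM, so simple thresholding of this map
    # can isolate regions such as the ones described above.
    return fmap
```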

8. AI Is Supportive but Must Be Used with Caution

Although AI-based solutions have numerous applications in radiology, e.g., they represent state-of-the-art technology used for image segmentation [106,107], there are several applications for which they are suboptimal or should be used with caution.
Firstly, deep learning-based contributions to medical image registration are still less accurate than methods based on classical numerical optimization [108,109,110,111,112]. In an experiment classifying patient sex through automated analysis of computed tomography scans of vertebrae, standard machine learning used for the classification of textural features (the classical approach) achieved an accuracy of 69%, while deep convolutional networks yielded a lower accuracy of 59% [113]. AI-based algorithms tend to have problems generalizing to previously unseen cases, requiring further instance-level optimization to achieve optimal results; for example, in the large-scale Learn2Reg benchmark, all of the best-performing methods used classical optimization to further improve the learning-based results for both the brain MRI and abdominal CT tasks [108]. The same observations apply to the BraTS-Reg challenge [109], which was dedicated to comparing algorithms for the registration of pre-operative and post-surgery brain MRIs; all of the best-performing solutions used traditional numerical optimization and outperformed the AI-based contributions. Moreover, classical numerical optimization methods, such as PDE-constrained optimization, have been shown to be highly effective in medical image analysis, providing reliable and high-fidelity results compared to deep learning-based solutions, especially in applications requiring precise alignment, such as neuroimaging and cardiovascular imaging [114]. Therefore, while deep learning-based methods show promise, they still lag behind classical numerical optimization techniques in terms of accuracy and generalizability, and further research is needed to enhance the performance of AI-based algorithms and address their limitations [115].
Another area in which AI methods should be used with caution is image generation and reconstruction; researchers currently use AI to generate new synthetic volumes for image augmentation [116], to transfer one modality to another (e.g., to generate synthetic CT from MR to avoid irradiation) [117], to directly create synthetic volumes from textual prompts [118], and to accelerate image reconstruction [119]. Although these applications are becoming increasingly successful, they should be applied with special care because they often tend to generate non-existing structures that may misguide medical experts [120]. Deep learning models used for MR-to-CT synthesis have demonstrated impressive results, but careful validation is necessary to ensure clinical reliability [121]; reconstructed images should be compared with standard sequences to ensure that no degradation or unintended alterations in image quality and anatomy have occurred [122].
AI-based contributions are also often evaluated incorrectly [123], leading to incorrect conclusions; for example, most contributions in automatic image segmentation evaluate accuracy using annotations from just a single annotator, without reporting confidence intervals [124,125]. However, if the ground truth is based on annotations from human experts, any evaluation should be performed using ground truths annotated by several radiologists, and the inter-rater variability should be assessed. Moreover, deep learning methods inherently suffer from problems with standardization and explainability [126], making it difficult to compare results across different studies and to ensure consistency in performance metrics [127]. Even though there are significant attempts to improve the interpretability and explainability of deep learning methods, we are still far from clear interpretations of their decisions and recommendations [128].
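A minimal sketch of the multi-annotator evaluation practice suggested above is given below, computing the Dice coefficient of one predicted mask against several expert masks and reporting the spread; the masks here are synthetic placeholders.

```python
# Minimal sketch: evaluating one predicted mask against several expert
# annotations and reporting mean and spread, rather than a single score.
import numpy as np

def dice(a: np.ndarray, b: np.ndarray) -> float:
    """Dice coefficient of two boolean masks."""
    return 2.0 * np.logical_and(a, b).sum() / max(a.sum() + b.sum(), 1)

rng = np.random.default_rng(0)
prediction = rng.random((128, 128)) > 0.5          # placeholder model output
annotators = [rng.random((128, 128)) > 0.5 for _ in range(3)]  # placeholders

scores = [dice(prediction, gt) for gt in annotators]
print(f"Dice vs. each expert: {np.mean(scores):.3f} +/- {np.std(scores):.3f}")
```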
Lastly, one should remember the vulnerability of CNN-based architectures to adversarial attacks, a significant concern in deep learning: small, often imperceptible, changes to input images can cause a CNN to make incorrect predictions. Even a single modified pixel or a learned adversarial pattern added to the input may completely fool a deep network [129], a dangerous situation because external modifications to the input volumes, invisible to radiologists, may change the neural network’s recommendations entirely. Although there are methods for detecting adversarial attacks using feature response maps, the users of CNNs should be aware of the risk [130]; more recent architectures based on vision transformers (ViTs) are more resistant to certain types of adversarial attacks than CNNs [131], but it is important to note that no model is entirely immune to such abuse [132].
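The following is a minimal sketch of the fast gradient sign method (FGSM), one of the simplest adversarial attacks of the kind described above, applied to an untrained placeholder network; it illustrates the mechanism only.

```python
# Minimal FGSM sketch: a small, gradient-aligned perturbation that can flip
# a network's prediction. The model is an untrained placeholder.
import torch
import torch.nn as nn
from torchvision import models

model = models.resnet18(num_classes=2).eval()
image = torch.rand(1, 3, 224, 224, requires_grad=True)
label = torch.tensor([0])

loss = nn.CrossEntropyLoss()(model(image), label)
loss.backward()  # gradient of the loss w.r.t. the input pixels

epsilon = 2.0 / 255.0  # small enough to be invisible to a human reader
adversarial = (image + epsilon * image.grad.sign()).clamp(0, 1).detach()

print(model(image).argmax(1), model(adversarial).argmax(1))  # may differ
```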

9. Review of AI Products Used in Radiology: Status in 2024

The review of AI products used in radiology in 2024 was conducted using two key databases: the Health AI Register [133] and AI Central [134]. The Health AI Register lists AI products available on the European market, all of which are CE-marked, indicating compliance with European regulatory standards; conversely, AI Central focuses on AI tools cleared by the FDA and commercially available in the United States. This dual-database approach ensured comprehensive coverage of the radiology AI landscape, and the review process involved counting and cataloging available products and analyzing their distribution across radiological subspecialties, imaging modalities, and targeted diseases. Regulatory certifications (CE marking and FDA clearance) were systematically reviewed, alongside data on the year of market entry and FDA clearance. The review was conducted in October 2024 to capture the most current state of the field.
The current status of AI applications used in Europe was reviewed on the basis of the Health AI Register [133], which provides the names of applications along with their subspecialty, modality, and type of certification approved for the EU market. The same system was used for a similar analysis by van Leeuwen in 2021 [20]; however, the landscape of the AI market has since changed significantly, and our analysis reflects the pace of development of AI solutions in the healthcare sector, revealing that, as of October 2024, there are 222 commercial AI-based products, an increase of 122%; among these, 213 products are reported to be certified, a 150% increase compared to the 85 certified products reported in 2021 (Figure 6).
The focus on neuroimaging and chest imaging, with 73 and 71 AI products, respectively, suggests a strong emphasis on developing AI applications in these areas, which may be attributed to high clinical demand and the complexity of interpreting neuroimaging and chest images, making these subspecialties ideal for AI innovation [135,136]. Areas such as musculoskeletal (MSK), abdominal, cardiac, and breast imaging also show considerable development, with product counts between 20 and 28, indicating significant interest and activity; in contrast, subspecialties including vascular, head/neck, spine, thyroid, and FDG PET-CT imaging have relatively few AI products (between 1 and 2), possibly reflecting lower demand, limited dataset availability, or less complex imaging challenges [137,138,139]. The above distributions highlight the concentrated development of AI tools in high-demand areas of medical imaging, with the potential for further growth as new needs and opportunities emerge across other subspecialties (Figure 7).
In terms of imaging modalities, CT and MR stand out, with the highest numbers of AI products—89 and 66, respectively—likely due to the extensive use and high diagnostic value of these modalities, as well as the complexity of their data, which makes them particularly well-suited for AI applications [140]. X-ray imaging follows, with 46 products, emphasizing a strong focus on AI in this widely utilized modality [141]; meanwhile, mammography and ultrasound, with 16 and 10 products, respectively, exhibit moderate levels of AI development, likely due to the specialized nature of mammography and the inherent variability of ultrasound imaging, which present unique challenges for AI algorithms. PET and SPECT, with only 3 and 1 products, respectively, represent the lowest levels of AI development among the imaging modalities, possibly due to their lower usage rates and specialized applications in nuclear medicine [142]. Overall, this distribution reflects the alignment of AI product development with high-demand, complex imaging modalities within the healthcare sector (Figure 8).
Regarding targeted diseases, lung cancer, stroke, and breast cancer lead, with 28, 24, and 19 AI products, respectively, underscoring the prevalence and clinical importance of these conditions [143] and suggesting that the complexity of diagnosis and the high clinical impacts of these diseases make them attractive areas for AI applications. Other notable areas of AI development include pneumothorax, with 16 products, and dementia, with 12 products, indicating growing interest in respiratory and neurodegenerative conditions. Moderate levels of AI product development are seen in diseases such as multiple sclerosis (11 products), emphysema (9 products), prostate cancer (9 products), pleural effusion (9 products), and tuberculosis (9 products), indicating valuable opportunities for AI applications in these areas as well. Diseases including pulmonary embolism and consolidation, pneumonia, COPD, COVID-19, and intracranial hemorrhage, with product counts ranging from 5 to 7, represent the least targeted conditions, potentially reflecting emerging areas in which AI applications are still in the early stages. Targeted disease distribution indicates a strong focus on high-impact, high-prevalence diseases [144,145] while also highlighting the potential for continued growth in AI-driven solutions for a broader range of diseases [146,147,148,149,150,151] (Figure 9).
A majority of AI products are certified as Class IIa under the Medical Device Directive (MDD), totaling 66 products, followed by Class I under the MDD, with 58 products, and Class IIa under the Medical Device Regulation (MDR), with 49 products; these categories represent the bulk of certified AI products, indicating a high level of compliance with established regulatory standards. Class IIb under the MDR comprises 30 products, showing a significant but smaller representation, while there are 9 products that are not yet certified or have no certification, suggesting that they may be in development or awaiting regulatory approval. Smaller categories include “Certified, Class unknown” (8 products), “Certified, Class I, MDR” (1 product), and “Certified, Class IIb, MDD” (1 product). The concentration of AI products in Class IIa and I certifications under both the MDD and MDR reflects a strong effort to comply with regulatory frameworks, especially for products targeting moderate-risk classifications; this distribution underscores the critical role of regulatory certification in the development and deployment of AI products in healthcare (Figure 10).
In terms of the market entry year, AI product releases before 2014 represent a notable group, with 18 entries, reflecting early AI market activity. From 2014 to 2016, there was a gradual increase in market entries, though still with small numbers each year. Beginning in 2017, a steady rise occurred, with 15 products in 2017 and 21 in 2018, an upward trend that peaked in 2020, with 50 products, indicating a surge in AI adoption likely driven by technological advancements and growing market demand [152]. Although 2021 saw a decline, to 31 products, it still represented high market activity; from 2022 to 2024, the number of entries remained low, with 8, 13, and 3 products, respectively, possibly reflecting a maturing market, saturation in product development, or a shift in focus toward other healthcare areas. This trend demonstrates the rapid expansion of the AI market in healthcare, with a potential move toward saturation in recent years [153,154,155,156,157]. The peak in AI radiology product approvals in 2020 can be attributed to several key factors: during this period, a wealth of COVID-19 imaging datasets became publicly available, offering researchers the crucial data needed to train and validate AI models effectively; additionally, significant increases in funding and grants were directed toward AI research, particularly in response to the pandemic [158] (Figure 11).
The chart in Figure 11 depicts the market entry year distribution for AI products, while that in Figure 12 presents the annual number of AI products cleared for use in radiology from 2008 to 2024. A comparison of these data provides insights into the dynamics of AI adoption in radiology as we can observe a significant growth phase starting around 2017, with surges in product entries and clearances. The peak in market entries occurred in 2020, with 50 new products entering the market, while product clearances peaked slightly later, in 2023—a lag that suggests that products entering the market require time for regulatory approval [159,160].
After 2020, the number of new product entries declined steadily, with only three entries recorded in 2024; this trend reflects a possible stabilization or maturation of the AI radiology market in which fewer groundbreaking entries were being introduced. The slight decline in product clearances in 2024, following a peak in 2023, suggests market saturation. The regulatory landscape may be stabilizing and the focus might be shifting, from introducing new products to refining and deploying existing solutions [161].
The initial growth phase and subsequent stabilization observed in Figure 11 and Figure 12 highlight the natural lifecycle of an emerging technology market, in which the early years are marked by innovation and rapid expansion, leading to a peak, followed by market consolidation and saturation as competition and regulatory frameworks mature [162,163].
In this work, we emphasize the rapid development of the AI medical market in recent years. Three years ago, van Leeuwen et al. [20] described the landscape of artificial intelligence (AI) products in radiology, identifying 100 commercially available CE-marked solutions on the European market. In the current work, on the basis of the same vendor-supplied noncommercial database, we were able to distinguish over 220 products (www.radiology.healthairegister.com (accessed on 28 August 2024)). This figure is unlikely to cover all available products, which becomes evident when different databases are compared, for instance, the AI and Radiology database (https://aiandradiology.com (accessed on 30 August 2024)), used to collect and review products present on the domestic Polish market. The Health AI Register is nevertheless the largest and most comprehensive of these databases and documents the certification process. Three years ago, 85 out of 100 reported products were validated, while, currently, 213 out of the 222 registered products are certified across the different classes (CE I–III), representing an important shift toward their acceptance by medical boards and easing their implementation in different medical institutions (Figure 6). Wu [164] investigated the adoption and usage of over 500 FDA-approved AI medical devices in the U.S. in 2023, focusing on 16 specific procedures that are billable through AI-specific CPT codes, spanning a variety of medical domains, including cardiology, ophthalmology, radiology, and liver health, with the most prevalent applications being coronary artery disease and diabetic retinopathy. Recently, in the U.S., 700 cleared AI algorithms were reported by Fornell, with 76% in radiology but covering various specialties (radiology: 527; cardiology: 71; neurology: 16; hematology: 14; gastroenterology and urology: 10; clinical chemistry: 7; ophthalmic: 7; general and plastic surgery: 5; anesthesiology: 5; pathology: 4; microbiology: 4; general hospital: 3; orthopedic: 3; ear, nose, and throat: 2; and dental: 1). This comparison illustrates the differences between the European and U.S. markets, reflects adoption obstacles within the European market, and indicates a significantly larger U.S. market, forecasting rapid development in the future [165,166].
Varghese [167] described several key challenges in the clinical adoption of artificial intelligence in medicine, organizing them under the RISE framework. Regulatory approval and sufficient evidence remain significant hurdles as many AI systems lack the rigorous prospective and multicenter clinical trials necessary for validation [22,168,169]; additionally, the majority of studies focus on retrospective or theoretical aspects, delaying their translation into clinical practice. Interpretability also poses a barrier as high-performing AI models, particularly those utilizing deep learning architectures, often lack transparency in their decision-making processes although clinicians are more likely to adopt AI systems that provide human-understandable explanations for their outputs. Additionally, interoperability challenges hinder the integration of AI systems into clinical workflows as effective communication between diverse hospital information systems, while preserving the meaning and context of medical data, is required. Finally, AI systems depend on high-quality and structured data, which are often limited, and reliance on unstructured sources, such as free-text medical records, introduces ambiguity and inaccuracies. The need to address these challenges through robust validation processes explains why AI adoption, despite a need for efficient automated systems in healthcare, is still limited. The development of integrating platforms and the formation of universal algorithms will, therefore, boost further AI implementation into medical practice.

10. Examples of Practical Implementation of AI Models

The safe, effective, and high-quality deployment of AI technologies is critical for advancing medical practice, and several medical environments provide exemplary use cases of robust AI models in detecting various pathologies and improving diagnostic workflows. In 2020, a multihospital experiment in Moscow [170] tested AI solutions for chest X-ray analysis across 178 state healthcare centers. AI frameworks analyzed redirected X-rays to detect abnormalities without prior training data. A top-performing framework employed advanced techniques such as EfficientNets and DenseNet, analyzing 17,888 cases over one month with an overall AUC of 0.77, ranging from 0.55 for herniation to 0.90 for pneumothorax. Robert et al. [171] evaluated the impact of AI as a second reader in detecting and localizing lung nodules on chest radiographs (CXRs) from 40 hospitals across the U.S. The study showed that AI assistance significantly improved diagnostic performance, with the mean AFROC increasing from 0.73 to 0.81 and the AUROC from 0.77 to 0.84, along with a sensitivity improvement from 72.8% to 83.5%; these results highlight AI’s potential as a tool to enhance diagnostic accuracy without increasing false positives.
Sacha et al. [172] assessed an AI system for detecting clinically significant prostate cancer on MRI using a retrospective cohort of 10,207 MRI examinations and a multi-reader study with 62 radiologists. The AI system demonstrated superior performance compared to radiologists (AUROC: 0.91 vs. 0.86) and detected 6.8% more significant cancers at the same specificity, emphasizing its potential as a diagnostic support tool. In a meta-analysis by Wang [173], the effectiveness of deep learning algorithms for detecting and segmenting brain metastases on MRI was evaluated: 42 relevant studies were identified and assessed using the QUADAS-2 and CLAIM tools, and the results showed a pooled lesion-wise Dice score of 79% and sensitivities of 86% (patient-wise) and 87% (lesion-wise). U-Net models performed best, with accuracy influenced by MRI hardware diversity and slice thickness; these findings underscore deep learning’s promise in brain metastasis diagnostics. Lu et al. [174] conducted a randomized, multi-reader, multi-case study to assess AI-assisted auto-contouring for brain tumor stereotactic radiosurgery (SRS), in which nine professionals contoured brain tumors in assisted and unassisted modes. AI significantly improved inter-reader agreement (DSC from 0.86 to 0.90, p < 0.001) and lesion detection sensitivity (91.3% vs. 82.6%, p = 0.030); additionally, AI assistance enhanced contouring accuracy and reduced the time spent by 30.8%, particularly benefiting less-experienced clinicians.
Salehi et al. [175] performed a meta-analysis on AI algorithms detecting primary bone tumors, comparing their diagnostic performance to that of clinicians. Internal validation showed AI sensitivity and specificity at 84% and 86%, respectively, compared to clinicians’ 76% and 64%; in external validation, AI achieved 84% sensitivity and 91% specificity, while clinicians reached 85% and 94%. With AI assistance, clinicians improved sensitivity to 95% but experienced reduced specificity (57%), highlighting AI’s potential while emphasizing the need for further optimization. Bachmann et al. [176] evaluated the impact of an AI tool on nonspecialist readers detecting traumatic fractures in appendicular skeleton radiographs. Using a multi-reader, multi-case design with 340 radiographic exams, sensitivity increased from 72% to 80% and specificity from 81% to 85% with AI assistance (p < 0.05); missed fractures decreased by 29% and false positives by 21%, without affecting reading time, with the greatest improvement in detecting nonobvious fractures, demonstrating AI’s potential to enhance diagnostic performance efficiently. Jalal et al. [177] highlighted AI’s role in emergency radiology, emphasizing its ability to enhance imaging analysis, workflow efficiency, and patient care quality, and to mitigate radiologist burnout. Ketola et al. [178] proposed a structured evaluation process for AI applications in radiology, including pre-evaluation, retrospective testing, and prospective clinical integration, to ensure safety, effectiveness, and compatibility with clinical standards.

11. Summary

Artificial intelligence has shown transformative potential in radiology, revolutionizing how medical imaging is interpreted and used in clinical practice. AI technologies have demonstrated the ability to automate a wide range of tasks, significantly enhancing efficiency and diagnostic accuracy. Tasks ranging from image segmentation, abnormality detection, and classification of imaging data to more advanced processes, such as report analysis and automated case triaging, are reshaping traditional radiological workflows, allowing radiologists to focus on complex cases and critical decision-making while routine processes are handled efficiently by AI.
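To make the triaging idea concrete, the sketch below reorders a reading worklist by a model's suspicion score so that flagged studies reach the radiologist first. The study IDs, scores, and the urgency threshold are hypothetical assumptions, not values from any deployed product.

```python
# Minimal sketch of AI-based worklist triage: studies flagged with high
# suspicion scores are moved to the front of the reading queue.
import heapq
from dataclasses import dataclass, field

URGENT_THRESHOLD = 0.8  # hypothetical operating point for "critical" findings

@dataclass(order=True)
class Study:
    neg_score: float                      # negated so the min-heap pops highest score first
    study_id: str = field(compare=False)

def triage(worklist: list[tuple[str, float]]) -> list[str]:
    """Return study IDs ordered so AI-flagged urgent cases are read first."""
    heap = [Study(-score, sid) for sid, score in worklist]
    heapq.heapify(heap)
    ordered = []
    while heap:
        s = heapq.heappop(heap)
        score = -s.neg_score
        tag = "URGENT" if score >= URGENT_THRESHOLD else "routine"
        ordered.append(f"{s.study_id} ({tag}, score={score:.2f})")
    return ordered

# Hypothetical worklist: (study ID, model suspicion score, e.g., for hemorrhage).
print(triage([("CT-001", 0.12), ("CT-002", 0.93), ("CT-003", 0.55)]))
```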
In recent years, the field of radiology has experienced an explosion of innovative AI applications, with research introducing numerous methodologies that leverage deep learning, machine learning, and other AI techniques to address specific challenges in medical imaging. Concurrently, the market has witnessed the emergence of many commercial AI products, some of which have obtained regulatory certifications, indicating their readiness for clinical deployment and underscoring the growing role of AI in both research and practice.
Despite the potential of AI, it is imperative to recognize the ongoing importance of standard image analysis techniques, such as textural analysis and radiomics. These methods, rooted in well-established statistical and geometric principles, remain valuable tools for extracting meaningful features from medical images, and their features offer partial medical interpretability, which deep learning models lack [113]. Combining these standard approaches with AI techniques can yield hybrid systems that leverage the strengths of both, producing robust, interpretable, and clinically useful outcomes.
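A minimal sketch of such a hybrid pipeline is given below, assuming gray-level co-occurrence matrix (GLCM) texture features computed with scikit-image are concatenated with stand-in deep features before a simple classifier; the patches, labels, and placeholder embeddings are synthetic and purely illustrative.

```python
# Minimal sketch of the hybrid idea discussed above: interpretable GLCM
# texture features fused with (stand-in) deep features for classification.
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def glcm_features(patch: np.ndarray) -> np.ndarray:
    """Classic Haralick-style features from a gray-level co-occurrence matrix."""
    glcm = graycomatrix(patch, distances=[1], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    props = ["contrast", "homogeneity", "energy", "correlation"]
    return np.array([graycoprops(glcm, p).mean() for p in props])

# Synthetic 8-bit "patches" standing in for ROI crops from medical images.
patches = rng.integers(0, 256, size=(40, 32, 32), dtype=np.uint8)
labels = rng.integers(0, 2, size=40)

texture = np.stack([glcm_features(p) for p in patches])  # interpretable block
deep = rng.normal(size=(40, 16))                          # placeholder CNN embeddings
hybrid = np.concatenate([texture, deep], axis=1)          # fused feature vector

clf = LogisticRegression(max_iter=1000).fit(hybrid, labels)
print("training accuracy:", clf.score(hybrid, labels))
```

The texture block keeps an interpretable trace (contrast, homogeneity, energy, correlation), while in a real system the deep block would come from a pretrained CNN's penultimate layer rather than random placeholders.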
Moreover, ethical considerations and adherence to regulatory standards are critical for the successful integration of AI into radiology. Protecting patient privacy and ensuring data security must remain fundamental priorities, and AI developers and healthcare institutions must comply with the stringent regulations governing the use of medical data in order to maintain trust and integrity in healthcare systems. Additionally, transparency in AI development, from dataset selection to model validation, is vital for addressing potential biases and ensuring equitable outcomes.
In conclusion, while AI offers unprecedented opportunities to advance radiology, its integration requires a balanced approach: acknowledging the continued relevance of standard methods, addressing ethical and regulatory challenges, and fostering a collaborative relationship between radiologists and AI are all essential for realizing its full potential. By combining the strengths of AI and traditional methodologies, radiology can achieve new levels of precision, efficiency, and impact in medical imaging.
The combined insights from the data reviewed in this manuscript suggest that the AI radiology market has transitioned from a phase of rapid growth to one of stabilization and potential saturation, a trend that reflects the increasing maturity of the field, in which regulatory processes and product deployment are catching up with the initial surge in innovation. Future efforts may focus more on optimizing existing solutions, rather than introducing entirely new products.

Author Contributions

Conceptualization, R.O., J.L., K.N., A.P. and M.S.; methodology, R.O., J.L., K.N., A.P. and M.S.; software, K.N. and M.W.; validation, J.L., K.N., A.P. and M.S.; formal analysis, R.O., J.L., K.N., A.P. and M.S.; investigation, R.O., J.L. and K.N.; resources, R.O. and J.L.; data curation, R.O., J.L. and K.N.; writing—original draft preparation, R.O. and J.L.; writing—review and editing, R.O., J.L., K.N., A.P., M.W. and M.S.; visualization, R.O., J.L., K.N. and M.W.; supervision, J.L., K.N., M.W., A.P. and M.S.; project administration, R.O., J.L., A.P. and M.S.; funding acquisition, R.O., A.P., K.N. and M.S. All authors have read and agreed to the published version of the manuscript.

Funding

This research did not receive any specific grant from funding agencies in the public, commercial, or not-for-profit sectors.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable; no data beyond those presented in this article were generated.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Bajwa, J.; Munir, U.; Nori, A.; Williams, B. Artificial Intelligence in Healthcare: Transforming the Practice of Medicine. Future Healthc. J. 2021, 8, e188–e194. [Google Scholar] [CrossRef]
  2. Morris, M.A.; Saboury, B.; Burkett, B.; Gao, J.; Siegel, E.L. Reinventing Radiology: Big Data and the Future of Medical Imaging. J. Thorac. Imaging 2018, 33, 4–16. [Google Scholar] [CrossRef] [PubMed]
  3. Qureshi, J.; Iqbal, S.; Ahmad, N. The Radiological Renaissance: Transformative Advances in Emergency Care. Cosm. J. Biol. 2024, 2, 62–69. [Google Scholar]
  4. Phillips-Wren, G. Ai Tools in Decision Making Support Systems: A Review. Int. J. Artif. Intell. Tools 2012, 21, 1240005. [Google Scholar] [CrossRef]
  5. Soori, M.; Jough, F.K.G.; Dastres, R.; Arezoo, B. AI-Based Decision Support Systems in Industry 4.0, A Review. J. Econ. Technol. 2024. [Google Scholar] [CrossRef]
  6. Pierce, R.L.; Van Biesen, W.; Van Cauwenberge, D.; Decruyenaere, J.; Sterckx, S. Explainability in Medicine in an Era of AI-Based Clinical Decision Support Systems. Front. Genet. 2022, 13, 903600. [Google Scholar] [CrossRef] [PubMed]
  7. Raparthi, M.; Gayam, S.R.; Kasaraneni, B.P.; Kondapaka, K.K.; Pattyam, S.P.; Putha, S.; Kuna, S.S.; Nimmagadda, V.S.P.; Sahu, M.K.; Thuniki, P. AI-Driven Decision Support Systems for Precision Medicine: Examining the Development and Implementation of AI-Driven Decision Support Systems in Precision Medicine. J. Artif. Intell. Res. 2021, 1, 11–20. [Google Scholar]
  8. Mintz, Y.; Brodie, R. Introduction to Artificial Intelligence in Medicine. Minim. Invasive Ther. Allied Technol. 2019, 28, 73–81. [Google Scholar] [CrossRef] [PubMed]
  9. Lai, V.; Chen, C.; Liao, Q.V.; Smith-Renner, A.; Tan, C. Towards a Science of Human-AI Decision Making: A Survey of Empirical Studies. arXiv 2021, arXiv:2112.11471. [Google Scholar]
  10. De Vreede, T.; Raghavan, M.; De Vreede, G.-J. Design Foundations for AI Assisted Decision Making: A Self Determination Theory Approach. In Proceedings of the 54th Hawaii International Conference on System Sciences, Kauai, HI, USA, 5–8 January 2021. [Google Scholar]
  11. Yang, X.; Aurisicchio, M. Designing Conversational Agents: A Self-Determination Theory Approach. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems, ACM, Yokohama, Japan, 6 May 2021; pp. 1–16. [Google Scholar]
  12. Wang, F.-Y.; Zhang, J.J.; Zheng, X.; Wang, X.; Yuan, Y.; Dai, X.; Zhang, J.; Yang, L. Where Does AlphaGo Go: From Church-Turing Thesis to AlphaGo Thesis and Beyond. IEEE/CAA J. Autom. Sin. 2016, 3, 113–120. [Google Scholar] [CrossRef]
  13. Hamet, P.; Tremblay, J. Artificial Intelligence in Medicine. Metab. Clin. Exp. 2017, 69, S36–S40. [Google Scholar] [CrossRef] [PubMed]
  14. Amisha; Malik, P.; Pathania, M.; Rathaur, V. Overview of Artificial Intelligence in Medicine. J. Fam. Med. Prim. Care 2019, 8, 2328. [Google Scholar] [CrossRef] [PubMed]
  15. Rezazade Mehrizi, M.H.; Van Ooijen, P.; Homan, M. Applications of Artificial Intelligence (AI) in Diagnostic Radiology: A Technography Study. Eur. Radiol. 2021, 31, 1805–1811. [Google Scholar] [CrossRef]
  16. Hosny, A.; Parmar, C.; Quackenbush, J.; Schwartz, L.H.; Aerts, H.J.W.L. Artificial Intelligence in Radiology. Nat. Rev. Cancer 2018, 18, 500–510. [Google Scholar] [CrossRef]
  17. Tariq, A.; Purkayastha, S.; Padmanaban, G.P.; Krupinski, E.; Trivedi, H.; Banerjee, I.; Gichoya, J.W. Current Clinical Applications of Artificial Intelligence in Radiology and Their Best Supporting Evidence. J. Am. Coll. Radiol. 2020, 17, 1371–1381. [Google Scholar] [CrossRef] [PubMed]
  18. Kapoor, N.; Lacson, R.; Khorasani, R. Workflow Applications of Artificial Intelligence in Radiology and an Overview of Available Tools. J. Am. Coll. Radiol. 2020, 17, 1363–1370. [Google Scholar] [CrossRef]
  19. Yamashita, R.; Nishio, M.; Do, R.K.G.; Togashi, K. Convolutional Neural Networks: An Overview and Application in Radiology. Insights Imaging 2018, 9, 611–629. [Google Scholar] [CrossRef] [PubMed]
  20. Van Leeuwen, K.G.; Schalekamp, S.; Rutten, M.J.C.M.; Van Ginneken, B.; De Rooij, M. Artificial Intelligence in Radiology: 100 Commercially Available Products and Their Scientific Evidence. Eur. Radiol. 2021, 31, 3797–3804. [Google Scholar] [CrossRef]
  21. Wichmann, J.L.; Willemink, M.J.; De Cecco, C.N. Artificial Intelligence and Machine Learning in Radiology: Current State and Considerations for Routine Clinical Implementation. Invest. Radiol. 2020, 55, 619–627. [Google Scholar] [CrossRef]
  22. Pesapane, F.; Summers, P. Ethics and Regulations for AI in Radiology. In Artificial Intelligence for Medicine; Elsevier: Amsterdam, The Netherlands, 2024; pp. 179–192. ISBN 978-0-443-13671-9. [Google Scholar]
  23. Maleki, F.; Muthukrishnan, N.; Ovens, K.; Reinhold, C.; Forghani, R. Machine Learning Algorithm Validation. Neuroimaging Clin. N. Am. 2020, 30, 433–445. [Google Scholar] [CrossRef] [PubMed]
  24. Krizhevsky, A.; Sutskever, I.; Hinton, G.E. ImageNet Classification with Deep Convolutional Neural Networks. Commun. ACM 2017, 60, 84–90. [Google Scholar] [CrossRef]
  25. Sutskever, I. Training Recurrent Neural Networks. Ph.D. Thesis, University of Toronto, Toronto, ON, Canada, 2013. [Google Scholar]
  26. Wu, Z.; Shen, C.; Van Den Hengel, A. Wider or Deeper: Revisiting the ResNet Model for Visual Recognition. Pattern Recognit. 2019, 90, 119–133. [Google Scholar] [CrossRef]
27. Deng, J.; Dong, W.; Socher, R.; Li, L.-J.; Kai, L.; Li, F.-F. ImageNet: A Large-Scale Hierarchical Image Database. In Proceedings of the 2009 IEEE Conference on Computer Vision and Pattern Recognition, Miami, FL, USA, 20–25 June 2009; pp. 248–255. [Google Scholar]
  28. Witt, S. How Jensen Huang’s Nvidia Is Powering the A.I. Revolution. The New Yorker. 2023. Available online: https://www.newyorker.com/magazine/2023/12/04/how-jensen-huangs-nvidia-is-powering-the-ai-revolution (accessed on 1 December 2024).
  29. Goriparthi, R.; Luqman, S. Deep Learning Architectures for Real-Time Image Recognition: Innovations and Applications. J. Data Sci. Intell. Syst. 2024, 15, 880–907. [Google Scholar]
  30. Khan, A.; Sohail, A.; Zahoora, U.; Qureshi, A.S. A Survey of the Recent Architectures of Deep Convolutional Neural Networks. Artif. Intell. Rev. 2020, 53, 5455–5516. [Google Scholar] [CrossRef]
  31. Cheng, J.; Wang, P.; Li, G.; Hu, Q.; Lu, H. Recent Advances in Efficient Computation of Deep Convolutional Neural Networks. Front. Inf. Technol. Electron. Eng. 2018, 19, 64–77. [Google Scholar] [CrossRef]
  32. Ali, I.; Hart, G.R.; Gunabushanam, G.; Liang, Y.; Muhammad, W.; Nartowt, B.; Kane, M.; Ma, X.; Deng, J. Lung Nodule Detection via Deep Reinforcement Learning. Front. Oncol. 2018, 8, 108. [Google Scholar] [CrossRef]
  33. Obuchowicz, R.; Strzelecki, M.; Piórkowski, A. Clinical Applications of Artificial Intelligence in Medical Imaging and Image Processing—A Review. Cancers 2024, 16, 1870. [Google Scholar] [CrossRef] [PubMed]
  34. He, K.; Zhang, X.; Ren, S.; Sun, J. Deep Residual Learning for Image Recognition. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778. [Google Scholar]
  35. Schulz, S.; Woerl, A.-C.; Jungmann, F.; Glasner, C.; Stenzel, P.; Strobl, S.; Fernandez, A.; Wagner, D.-C.; Haferkamp, A.; Mildenberger, P.; et al. Multimodal Deep Learning for Prognosis Prediction in Renal Cancer. Front. Oncol. 2021, 11, 788740. [Google Scholar] [CrossRef] [PubMed]
  36. Yi, P.H.; Kim, T.K.; Yu, A.C.; Bennett, B.; Eng, J.; Lin, C.T. Can AI Outperform a Junior Resident? Comparison of Deep Neural Network to First-Year Radiology Residents for Identification of Pneumothorax. Emerg. Radiol. 2020, 27, 367–375. [Google Scholar] [CrossRef] [PubMed]
  37. Chamberlin, J.H.; Aquino, G.; Nance, S.; Wortham, A.; Leaphart, N.; Paladugu, N.; Brady, S.; Baird, H.; Fiegel, M.; Fitzpatrick, L.; et al. Automated Diagnosis and Prognosis of COVID-19 Pneumonia from Initial ER Chest X-Rays Using Deep Learning. BMC Infect. Dis. 2022, 22, 637. [Google Scholar] [CrossRef]
  38. Abrahamsen, B.S.; Knudtsen, I.S.; Eikenes, L.; Bathen, T.F.; Elschot, M. Pelvic PET/MR Attenuation Correction in the Image Space Using Deep Learning. Front. Oncol. 2023, 13, 1220009. [Google Scholar] [CrossRef] [PubMed]
  39. Wang, C.; Shao, J.; Xu, X.; Yi, L.; Wang, G.; Bai, C.; Guo, J.; He, Y.; Zhang, L.; Yi, Z.; et al. DeepLN: A Multi-Task AI Tool to Predict the Imaging Characteristics, Malignancy and Pathological Subtypes in CT-Detected Pulmonary Nodules. Front. Oncol. 2022, 12, 683792. [Google Scholar] [CrossRef] [PubMed]
  40. Albiol, A.; Albiol, F.; Paredes, R.; Plasencia-Martínez, J.M.; Blanco Barrio, A.; Santos, J.M.G.; Tortajada, S.; González Montaño, V.M.; Rodríguez Godoy, C.E.; Fernández Gómez, S.; et al. A Comparison of COVID-19 Early Detection between Convolutional Neural Networks and Radiologists. Insights Imaging 2022, 13, 122. [Google Scholar] [CrossRef] [PubMed]
  41. Huang, G.; Liu, Z.; Van Der Maaten, L.; Weinberger, K.Q. Densely Connected Convolutional Networks. In Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), IEEE, Honolulu, HI, USA, 21–26 July 2017; pp. 2261–2269. [Google Scholar]
  42. Szegedy, C.; Vanhoucke, V.; Ioffe, S.; Shlens, J.; Wojna, Z. Rethinking the Inception Architecture for Computer Vision. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016; pp. 2818–2826. [Google Scholar]
43. Szegedy, C.; Ioffe, S.; Vanhoucke, V.; Alemi, A. Inception-v4, Inception-ResNet and the Impact of Residual Connections on Learning. In Proceedings of the AAAI Conference on Artificial Intelligence, San Francisco, CA, USA, 4–9 February 2017. [Google Scholar]
  44. Fink, A.; Tran, H.; Reisert, M.; Rau, A.; Bayer, J.; Kotter, E.; Bamberg, F.; Russe, M.F. A Deep Learning Approach for Projection and Body-Side Classification in Musculoskeletal Radiographs. Eur. Radiol. Exp. 2024, 8, 23. [Google Scholar] [CrossRef] [PubMed]
  45. Arbabshirani, M.R.; Fornwalt, B.K.; Mongelluzzo, G.J.; Suever, J.D.; Geise, B.D.; Patel, A.A.; Moore, G.J. Advanced Machine Learning in Action: Identification of Intracranial Hemorrhage on Computed Tomography Scans of the Head with Clinical Workflow Integration. NPJ Digit. Med. 2018, 1, 9. [Google Scholar] [CrossRef] [PubMed]
  46. Nguyen, T.P.; Jung, J.W.; Yoo, Y.J.; Choi, S.H.; Yoon, J. Intelligent Evaluation of Global Spinal Alignment by a Decentralized Convolutional Neural Network. J. Digit. Imaging 2022, 35, 213–225. [Google Scholar] [CrossRef]
  47. Solak, A.; Ceylan, R.; Bozkurt, M.A.; Cebeci, H.; Koplay, M. Adrenal Lesion Classification with Abdomen Caps and the Effect of ROI Size. Phys. Eng. Sci. Med. 2023, 46, 865–875. [Google Scholar] [CrossRef]
  48. Gasulla, Ó.; Ledesma-Carbayo, M.J.; Borrell, L.N.; Fortuny-Profitós, J.; Mazaira-Font, F.A.; Barbero Allende, J.M.; Alonso-Menchén, D.; García-Bennett, J.; Del Río-Carrrero, B.; Jofré-Grimaldo, H.; et al. Enhancing Physicians’ Radiology Diagnostics of COVID-19’s Effects on Lung Health by Leveraging Artificial Intelligence. Front. Bioeng. Biotechnol. 2023, 11, 1010679. [Google Scholar] [CrossRef] [PubMed]
  49. Santhanam, P.; Nath, T.; Peng, C.; Bai, H.; Zhang, H.; Ahima, R.S.; Chellappa, R. Artificial Intelligence and Body Composition. Diabetes Metab. Syndr.: Clin. Res. Rev. 2023, 17, 102732. [Google Scholar] [CrossRef] [PubMed]
  50. Willemink, M.J.; Roth, H.R.; Sandfort, V. Toward Foundational Deep Learning Models for Medical Imaging in the New Era of Transformer Networks. Radiol. Artif. Intell. 2022, 4, e210284. [Google Scholar] [CrossRef]
  51. Vaswani, A.; Shazeer, N.; Parmar, N.; Uszkoreit, J.; Jones, L.; Gomez, A.N.; Kaiser, Ł.; Polosukhin, I. Attention Is All You Need. In Proceedings of the Advances in Neural Information Processing Systems, Long Beach, CA, USA, 4–9 December 2017; Guyon, I., Luxburg, U.V., Bengio, S., Wallach, H., Fergus, R., Vishwanathan, S., Garnett, R., Eds.; Curran Associates, Inc.: Newry, UK, 2017; Volume 30. [Google Scholar]
  52. Larson, N.; Nguyen, C.; Do, B.; Kaul, A.; Larson, A.; Wang, S.; Wang, E.; Bultman, E.; Stevens, K.; Pai, J.; et al. Artificial Intelligence System for Automatic Quantitative Analysis and Radiology Reporting of Leg Length Radiographs. J. Digit. Imaging 2022, 35, 1494–1505. [Google Scholar] [CrossRef] [PubMed]
  53. Nurzynska, K.; Li, D.; Walts, A.E.; Gertych, A. Multilayer Outperforms Single-Layer Slide Scanning in AI-Based Classification of Whole Slide Images with Low-Burden Acid-Fast Mycobacteria (AFB). Comput. Methods Programs Biomed. 2023, 234, 107518. [Google Scholar] [CrossRef] [PubMed]
  54. Haji Maghsoudi, O.; Gastounioti, A.; Scott, C.; Pantalone, L.; Wu, F.-F.; Cohen, E.A.; Winham, S.; Conant, E.F.; Vachon, C.; Kontos, D. Deep-LIBRA: An Artificial-Intelligence Method for Robust Quantification of Breast Density with Independent Validation in Breast Cancer Risk Assessment. Med. Image Anal. 2021, 73, 102138. [Google Scholar] [CrossRef]
  55. Ronneberger, O.; Fischer, P.; Brox, T. U-Net: Convolutional Networks for Biomedical Image Segmentation. In Medical Image Computing and Computer-Assisted Intervention–MICCAI 2015; Navab, N., Hornegger, J., Wells, W.M., Frangi, A.F., Eds.; Lecture Notes in Computer Science; Springer International Publishing: Cham, Switzerland, 2015; Volume 9351, pp. 234–241. ISBN 978-3-319-24573-7. [Google Scholar]
  56. Stember, J.N.; Shalu, H. Deep Neuroevolution Squeezes More Out of Small Neural Networks and Small Training Sets: Sample Application to MRI Brain Sequence Classification. In International Symposium on Intelligent Informatics; Thampi, S.M., Mukhopadhyay, J., Paprzycki, M., Li, K.-C., Eds.; Smart Innovation, Systems and Technologies; Springer Nature: Singapore, 2023; Volume 333, pp. 153–167. ISBN 978-981-19-8093-0. [Google Scholar]
  57. Seker, M.E.; Koyluoglu, Y.O.; Ozaydin, A.N.; Gurdal, S.O.; Ozcinar, B.; Cabioglu, N.; Ozmen, V.; Aribal, E. Diagnostic Capabilities of Artificial Intelligence as an Additional Reader in a Breast Cancer Screening Program. Eur. Radiol. 2024, 34, 6145–6157. [Google Scholar] [CrossRef] [PubMed]
  58. Zhang, S.; Xin, X.; Wang, Y.; Guo, Y.; Hao, Q.; Yang, X.; Wang, J.; Zhang, J.; Zhang, B.; Wang, W. Automated Radiological Report Generation For Chest X-Rays With Weakly-Supervised End-to-End Deep Learning. arXiv 2020, arXiv:2006.10347. [Google Scholar]
  59. Bassi, P.R.A.S.; Yavuz, M.C.; Wang, K.; Chen, X.; Li, W.; Decherchi, S.; Cavalli, A.; Yang, Y.; Yuille, A.; Zhou, Z. RadGPT: Constructing 3D Image-Text Tumor Datasets. arXiv 2025, arXiv:2501.04678. [Google Scholar]
  60. Sun, Z.; Ong, H.; Kennedy, P.; Tang, L.; Chen, S.; Elias, J.; Lucas, E.; Shih, G.; Peng, Y. Evaluating GPT4 on Impressions Generation in Radiology Reports. Radiology 2023, 307, e231259. [Google Scholar] [CrossRef] [PubMed]
  61. Kathait, A.S.; Garza-Frias, E.; Sikka, T.; Schultz, T.J.; Bizzo, B.; Kalra, M.K.; Dreyer, K.J. Assessing Laterality Errors in Radiology: Comparing Generative Artificial Intelligence and Natural Language Processing. J. Am. Coll. Radiol. 2024, 21, 1575–1582. [Google Scholar] [CrossRef]
  62. Butler, J.J.; Harrington, M.C.; Tong, Y.; Rosenbaum, A.J.; Samsonov, A.P.; Walls, R.J.; Kennedy, J.G. From Jargon to Clarity: Improving the Readability of Foot and Ankle Radiology Reports with an Artificial Intelligence Large Language Model. Foot Ankle Surg. 2024, 30, 331–337. [Google Scholar] [CrossRef]
  63. Adams, L.C.; Truhn, D.; Busch, F.; Kader, A.; Niehues, S.M.; Makowski, M.R.; Bressem, K.K. Leveraging GPT-4 for Post Hoc Transformation of Free-Text Radiology Reports into Structured Reporting: A Multilingual Feasibility Study. Radiology 2023, 307, e230725. [Google Scholar] [CrossRef] [PubMed]
  64. Matute-González, M.; Darnell, A.; Comas-Cufí, M.; Pazó, J.; Soler, A.; Saborido, B.; Mauro, E.; Turnes, J.; Forner, A.; Reig, M.; et al. Utilizing a Domain-Specific Large Language Model for LI-RADS V2018 Categorization of Free-Text MRI Reports: A Feasibility Study. Insights Imaging 2024, 15, 280. [Google Scholar] [CrossRef] [PubMed]
  65. Zhou, S.K.; Greenspan, H.; Davatzikos, C.; Duncan, J.S.; Van Ginneken, B.; Madabhushi, A.; Prince, J.L.; Rueckert, D.; Summers, R.M. A Review of Deep Learning in Medical Imaging: Imaging Traits, Technology Trends, Case Studies With Progress Highlights, and Future Promises. Proc. IEEE 2021, 109, 820–838. [Google Scholar] [CrossRef] [PubMed]
  66. Najjar, R. Redefining Radiology: A Review of Artificial Intelligence Integration in Medical Imaging. Diagnostics 2023, 13, 2760. [Google Scholar] [CrossRef] [PubMed]
67. Semary, N.A.; Ahmed, W.; Amin, K.; Pławiak, P.; Hammad, M. Enhancing Machine Learning-Based Sentiment Analysis through Feature Extraction Techniques. PLoS ONE 2024, 19, e0294968. [Google Scholar] [CrossRef]
  68. Davis, E. Benchmarks for Automated Commonsense Reasoning: A Survey. ACM Comput. Surv. 2024, 56, 81. [Google Scholar] [CrossRef]
  69. Huang, Y.; Zhang, J.; Shan, Z.; He, J. Compression Represents Intelligence Linearly. arXiv 2024, arXiv:2404.09937. [Google Scholar]
  70. LeCun, Y.; Bengio, Y.; Hinton, G. Deep Learning. Nature 2015, 521, 436–444. [Google Scholar] [CrossRef] [PubMed]
  71. Alshqaqeeq, F.; McGuire, C.; Overcash, M.; Ali, K.; Twomey, J. Choosing Radiology Imaging Modalities to Meet Patient Needs with Lower Environmental Impact. Resour. Conserv. Recycl. 2020, 155, 104657. [Google Scholar] [CrossRef]
72. Wang, P.; Vasconcelos, N. A Machine Teaching Framework for Scalable Recognition. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Montreal, QC, Canada, 11–17 October 2021; pp. 4945–4954. [Google Scholar]
  73. Raina, R.; Battle, A.; Lee, H.; Packer, B.; Ng, A.Y. Self-Taught Learning: Transfer Learning from Unlabeled Data. In Proceedings of the 24th International Conference on Machine Learning, Corvallis, OR, USA, 20–24 June 2007; pp. 759–766. [Google Scholar]
  74. Pfob, A.; Lu, S.-C.; Sidey-Gibbons, C. Machine Learning in Medicine: A Practical Introduction to Techniques for Data Pre-Processing, Hyperparameter Tuning, and Model Comparison. BMC Med. Res. Methodol. 2022, 22, 282. [Google Scholar] [CrossRef] [PubMed]
  75. Borgli, R.J.; Kvale Stensland, H.; Riegler, M.A.; Halvorsen, P. Automatic Hyperparameter Optimization for Transfer Learning on Medical Image Datasets Using Bayesian Optimization. In Proceedings of the 2019 13th International Symposium on Medical Information and Communication Technology (ISMICT), Oslo, Norway, 8–10 May 2019; pp. 1–6. [Google Scholar]
  76. Bradshaw, T.J.; Huemann, Z.; Hu, J.; Rahmim, A. A Guide to Cross-Validation for Artificial Intelligence in Medical Imaging. Radiol. Artif. Intell. 2023, 5, e220232. [Google Scholar] [CrossRef] [PubMed]
  77. Coppola, F.; Faggioni, L.; Gabelloni, M.; De Vietro, F.; Mendola, V.; Cattabriga, A.; Cocozza, M.A.; Vara, G.; Piccinino, A.; Lo Monaco, S.; et al. Human, All Too Human? An All-Around Appraisal of the “Artificial Intelligence Revolution” in Medical Imaging. Front. Psychol. 2021, 12, 710982. [Google Scholar] [CrossRef] [PubMed]
  78. Jun-Yong, H.; Hyun, P.S.; Young-Jin, J. Artificial Intelligence Based Medical Imaging: An Overview. J. Radiol. Sci. Technol. 2020, 43, 195–208. [Google Scholar] [CrossRef]
  79. Tran, K.; Bøtker, J.P.; Aframian, A.; Memarzadeh, K. Artificial Intelligence for Medical Imaging. In Artificial Intelligence in Healthcare; Elsevier: Amsterdam, The Netherlands, 2020; pp. 143–162. ISBN 978-0-12-818438-7. [Google Scholar]
  80. Hasani, N.; Morris, M.A.; Rahmim, A.; Summers, R.M.; Jones, E.; Siegel, E.; Saboury, B. Trustworthy Artificial Intelligence in Medical Imaging. PET Clin. 2022, 17, 1–12. [Google Scholar] [CrossRef]
  81. Jiang, C.; Huang, Z.; Pedapati, T.; Chen, P.-Y.; Sun, Y.; Gao, J. Network Properties Determine Neural Network Performance. Nat. Commun. 2024, 15, 5718. [Google Scholar] [CrossRef]
  82. Kaur, P.; Singh, R.K. A Review on Optimization Techniques for Medical Image Analysis. Concurr. Comput. 2023, 35, e7443. [Google Scholar] [CrossRef]
  83. Kim, Y.J.; Kim, K.G. Development of an Optimized Deep Learning Model for Medical Imaging. J. Korean Soc. Radiol. 2020, 81, 1274. [Google Scholar] [CrossRef]
  84. Centeno-Telleria, M.; Zulueta, E.; Fernandez-Gamiz, U.; Teso-Fz-Betoño, D.; Teso-Fz-Betoño, A. Differential Evolution Optimal Parameters Tuning with Artificial Neural Network. Mathematics 2021, 9, 427. [Google Scholar] [CrossRef]
85. Rajbhandari, S.; Rasley, J.; Ruwase, O.; He, Y. ZeRO: Memory Optimizations Toward Training Trillion Parameter Models. In Proceedings of the SC20: International Conference for High Performance Computing, Networking, Storage and Analysis, Atlanta, GA, USA, 9–19 November 2020. [Google Scholar]
  86. Ji, J.L.; Spronk, M.; Kulkarni, K.; Repovš, G.; Anticevic, A.; Cole, M.W. Mapping the Human Brain’s Cortical-Subcortical Functional Network Organization. NeuroImage 2019, 185, 35–57. [Google Scholar] [CrossRef]
  87. Maguire, P.; Moser, P.; Maguire, R. Understanding Consciousness as Data Compression. J. Cogn. Sci. 2016, 17, 63–94. [Google Scholar] [CrossRef]
  88. Waqas, A.; Bui, M.M.; Glassy, E.F.; El Naqa, I.; Borkowski, P.; Borkowski, A.A.; Rasool, G. Revolutionizing Digital Pathology With the Power of Generative Artificial Intelligence and Foundation Models. Lab. Investig. 2023, 103, 100255. [Google Scholar] [CrossRef]
  89. Busby, L.P.; Courtier, J.L.; Glastonbury, C.M. Bias in Radiology: The How and Why of Misses and Misinterpretations. RadioGraphics 2018, 38, 236–247. [Google Scholar] [CrossRef]
  90. Tee, Q.X.; Nambiar, M.; Stuckey, S. Error and Cognitive Bias in Diagnostic Radiology. J. Med. Imag. Rad. Onc 2022, 66, 202–207. [Google Scholar] [CrossRef] [PubMed]
  91. Dujmović, M.; Malhotra, G.; Bowers, J.S. What Do Adversarial Images Tell Us about Human Vision? eLife 2020, 9, e55978. [Google Scholar] [CrossRef] [PubMed]
  92. Rashid, A.B.; Kausik, M.A.K. AI Revolutionizing Industries Worldwide: A Comprehensive Overview of Its Diverse Applications. Hybrid. Adv. 2024, 7, 100277. [Google Scholar] [CrossRef]
  93. Yousaf Gill, A.; Saeed, A.; Rasool, S.; Husnain, A.; Khawar Hussain, H. Revolutionizing Healthcare: How Machine Learning Is Transforming Patient Diagnoses—A Comprehensive Review of AI’s Impact on Medical Diagnosis. JWS 2023, 2, 1638–1652. [Google Scholar] [CrossRef]
  94. Recht, M.P.; Dewey, M.; Dreyer, K.; Langlotz, C.; Niessen, W.; Prainsack, B.; Smith, J.J. Integrating Artificial Intelligence into the Clinical Practice of Radiology: Challenges and Recommendations. Eur. Radiol. 2020, 30, 3576–3584. [Google Scholar] [CrossRef] [PubMed]
  95. Willemink, M.J.; Koszek, W.A.; Hardell, C.; Wu, J.; Fleischmann, D.; Harvey, H.; Folio, L.R.; Summers, R.M.; Rubin, D.L.; Lungren, M.P. Preparing Medical Imaging Data for Machine Learning. Radiology 2020, 295, 4–15. [Google Scholar] [CrossRef]
  96. Li, X.; Morgan, P.S.; Ashburner, J.; Smith, J.; Rorden, C. The First Step for Neuroimaging Data Analysis: DICOM to NIfTI Conversion. J. Neurosci. Methods 2016, 264, 47–56. [Google Scholar] [CrossRef]
  97. Tovino, S.A. The HIPAA Privacy Rule and the EU GDPR: Illustrative Comparisons. Seton Hall. Law. Rev. 2017, 47, 973–993. [Google Scholar] [PubMed]
  98. Mayerhoefer, M.E.; Materka, A.; Langs, G.; Häggström, I.; Szczypiński, P.; Gibbs, P.; Cook, G. Introduction to Radiomics. J. Nucl. Med. 2020, 61, 488–495. [Google Scholar] [CrossRef] [PubMed]
  99. Vliegenthart, R.; Fouras, A.; Jacobs, C.; Papanikolaou, N. Innovations in Thoracic Imaging: CT, Radiomics, AI and X-ray Velocimetry. Respirology 2022, 27, 818–833. [Google Scholar] [CrossRef]
  100. Kunimatsu, A.; Yasaka, K.; Akai, H.; Sugawara, H.; Kunimatsu, N.; Abe, O. Texture Analysis in Brain Tumor MR Imaging. MRMS 2022, 21, 95–109. [Google Scholar] [CrossRef]
  101. Obuchowicz, R.; Kruszyńska, J.; Strzelecki, M. Classifying Median Nerves in Carpal Tunnel Syndrome: Ultrasound Image Analysis. Biocybern. Biomed. Eng. 2021, 41, 335–351. [Google Scholar] [CrossRef]
  102. Materka, A.; Strzelecki, M. On the Importance of MRI Nonuniformity Correction for Texture Analysis. In Proceedings of the IEEE International Conference on Signal Processing, Algorithms, Architectures, Arrangements and Applications, SPA, Poznan, Poland, 26–28 September 2013. [Google Scholar]
  103. Kociołek, M.; Strzelecki, M.; Obuchowicz, R. Does Image Normalization and Intensity Resolution Impact Texture Classification? Comput. Med. Imaging Graph. 2020, 81, 101716. [Google Scholar] [CrossRef] [PubMed]
  104. Szczypiński, P.M.; Strzelecki, M.; Materka, A.; Klepaczko, A. Mazda—The Software Package for Textural Analysis of Biomedical Images. In Computers in Medical Activity; Kącki, E., Rudnicki, M., Stempczyńska, J., Eds.; Advances in Soft Computing; Springer: Berlin/Heidelberg, Germany, 2009; Volume 65, pp. 73–84. ISBN 978-3-642-04461-8. [Google Scholar]
  105. Reska, D.; Kretowski, M. GPU-Accelerated Image Segmentation Based on Level Sets and Multiple Texture Features. Multimed. Tools Appl. 2021, 80, 5087–5109. [Google Scholar] [CrossRef]
  106. Isensee, F.; Jaeger, P.F.; Kohl, S.A.A.; Petersen, J.; Maier-Hein, K.H. nnU-Net: A Self-Configuring Method for Deep Learning-Based Biomedical Image Segmentation. Nat. Methods 2021, 18, 203–211. [Google Scholar] [CrossRef]
  107. Li, W.; Qu, C.; Chen, X.; Bassi, P.R.A.S.; Shi, Y.; Lai, Y.; Yu, Q.; Xue, H.; Chen, Y.; Lin, X.; et al. AbdomenAtlas: A Large-Scale, Detailed-Annotated, & Multi-Center Dataset for Efficient Transfer Learning and Open Algorithmic Benchmarking. Med. Image Anal. 2024, 97, 103285. [Google Scholar] [CrossRef] [PubMed]
  108. Hering, A.; Hansen, L.; Mok, T.C.W.; Chung, A.C.S.; Siebert, H.; Hager, S.; Lange, A.; Kuckertz, S.; Heldmann, S.; Shao, W.; et al. Learn2Reg: Comprehensive Multi-Task Medical Image Registration Challenge, Dataset and Evaluation in the Era of Deep Learning. IEEE Trans. Med. Imaging 2023, 42, 697–712. [Google Scholar] [CrossRef] [PubMed]
  109. Baheti, B.; Chakrabarty, S.; Akbari, H.; Bilello, M.; Wiestler, B.; Schwarting, J.; Calabrese, E.; Rudie, J.; Abidi, S.; Mousa, M.; et al. The Brain Tumor Sequence Registration (BraTS-Reg) Challenge: Establishing Correspondence Between Pre-Operative and Follow-up MRI Scans of Diffuse Glioma Patients. arXiv 2021, arXiv:2112.06979. [Google Scholar]
  110. Chen, X.; Diaz-Pinto, A.; Ravikumar, N.; Frangi, A. Deep Learning in Medical Image Registration. Prog. Biomed. Eng. 2020, 3, 012003. [Google Scholar] [CrossRef]
  111. Xiao, H.; Teng, X.; Liu, C.; Li, T.; Ren, G.; Yang, R.; Shen, D.; Cai, J. A Review of Deep Learning-Based Three-Dimensional Medical Image Registration Methods. Quant. Imaging Med. Surg. 2021, 11, 4895–4916. [Google Scholar] [CrossRef]
  112. Andrade, N.; Faria, F.A.; Cappabianco, F.A.M. A Practical Review on Medical Image Registration: From Rigid to Deep Learning Based Approaches. In Proceedings of the 2018 31st SIBGRAPI Conference on Graphics, Patterns and Images (SIBGRAPI), Parana, Brazil, 29 October–1 November 2018; pp. 463–470. [Google Scholar]
  113. Nurzynska, K.; Piórkowski, A.; Strzelecki, M.; Kociołek, M.; Banyś, R.P.; Obuchowicz, R. Differentiating Age and Sex in Vertebral Body CT Scans—Texture Analysis versus Deep Learning Approach. Biocybern. Biomed. Eng. 2024, 44, 20–30. [Google Scholar] [CrossRef]
  114. Mang, A.; Gholami, A.; Davatzikos, C.; Biros, G. PDE-Constrained Optimization in Medical Image Analysis. Optim. Eng. 2018, 19, 765–812. [Google Scholar] [CrossRef]
  115. Chen, J.; Liu, Y.; Wei, S.; Bian, Z.; Subramanian, S.; Carass, A.; Prince, J.L.; Du, Y. A Survey on Deep Learning in Medical Image Registration: New Technologies, Uncertainty, Evaluation Metrics, and Beyond. Med. Image Anal. 2025, 100, 103385. [Google Scholar] [CrossRef]
  116. Friedrich, P.; Frisch, Y.; Cattin, P.C. Deep Generative Models for 3D Medical Image Synthesis. arXiv 2024, arXiv:2410.17664. [Google Scholar]
  117. Huijben, E.M.C.; Terpstra, M.L.; Galapon, A., Jr.; Pai, S.; Thummerer, A.; Koopmans, P.; Afonso, M.; Van Eijnatten, M.; Gurney-Champion, O.; Chen, Z.; et al. Generating Synthetic Computed Tomography for Radiotherapy: SynthRAD2023 Challenge Report. Med. Image Anal. 2024, 97, 103276. [Google Scholar] [CrossRef] [PubMed]
  118. Cho, J.; Zakka, C.; Kaur, D.; Shad, R.; Wightman, R.; Chaudhari, A.; Hiesinger, W. MediSyn: Text-Guided Diffusion Models for Broad Medical 2D and 3D Image Synthesis. arXiv 2024, arXiv:2405.09806. [Google Scholar]
119. Muckley, M.J.; Riemenschneider, B.; Radmanesh, A.; Kim, S.; Jeong, G.; Ko, J.; Jun, Y.; Shin, H.; Hwang, D.; Mostapha, M.; et al. Results of the 2020 fastMRI Challenge for Machine Learning MR Image Reconstruction. IEEE Trans. Med. Imaging 2021, 40, 2306–2317. [Google Scholar] [CrossRef]
  120. Bhadra, S.; Kelkar, V.A.; Brooks, F.J.; Anastasio, M.A. On Hallucinations in Tomographic Image Reconstruction. IEEE Trans. Med. Imaging 2021, 40, 3249–3260. [Google Scholar] [CrossRef] [PubMed]
  121. Roh, J.; Ryu, D.; Lee, J. CT Synthesis with Deep Learning for MR-Only Radiotherapy Planning: A Review. Biomed. Eng. Lett. 2024, 14, 1259–1278. [Google Scholar] [CrossRef] [PubMed]
  122. Chung, T.; Dillman, J.R. Deep Learning Image Reconstruction: A Tremendous Advance for Clinical MRI but Be Careful…. Pediatr. Radiol. 2023, 53, 2157–2158. [Google Scholar] [CrossRef] [PubMed]
  123. Reinke, A.; Tizabi, M.D.; Baumgartner, M.; Eisenmann, M.; Heckmann-Nötzel, D.; Kavur, A.E.; Rädsch, T.; Sudre, C.H.; Acion, L.; Antonelli, M.; et al. Understanding Metric-Related Pitfalls in Image Analysis Validation. Nat. Methods 2024, 21, 182–194. [Google Scholar] [CrossRef] [PubMed]
  124. Rädsch, T.; Reinke, A.; Weru, V.; Tizabi, M.D.; Heller, N.; Isensee, F.; Kopp-Schneider, A.; Maier-Hein, L. Quality Assured: Rethinking Annotation Strategies in Imaging AI. In Computer Vision—ECCV 2024; Leonardis, A., Ricci, E., Roth, S., Russakovsky, O., Sattler, T., Varol, G., Eds.; Lecture Notes in Computer Science; Springer Nature Switzerland: Cham, Switzerland, 2025; Volume 15136, pp. 52–69. ISBN 978-3-031-73228-7. [Google Scholar]
  125. Christodoulou, E.; Reinke, A.; Houhou, R.; Kalinowski, P.; Erkan, S.; Sudre, C.H.; Burgos, N.; Boutaj, S.; Loizillon, S.; Solal, M.; et al. Confidence Intervals Uncovered: Are We Ready for Real-World Medical Imaging AI? In Medical Image Computing and Computer Assisted Intervention—MICCAI 2024; Linguraru, M.G., Dou, Q., Feragen, A., Giannarou, S., Glocker, B., Lekadir, K., Schnabel, J.A., Eds.; Lecture Notes in Computer Science; Springer Nature Switzerland: Cham, Switzerland, 2024; Volume 15010, pp. 124–132. ISBN 978-3-031-72116-8. [Google Scholar]
  126. Fernandez-Quilez, A. Deep Learning in Radiology: Ethics of Data and on the Value of Algorithm Transparency, Interpretability and Explainability. AI Ethics 2023, 3, 257–265. [Google Scholar] [CrossRef]
  127. Keel, B.; Quyn, A.; Jayne, D.; Relton, S.D. State-of-the-Art Performance of Deep Learning Methods for Pre-Operative Radiologic Staging of Colorectal Cancer Lymph Node Metastasis: A Scoping Review. BMJ Open 2024, 14, e086896. [Google Scholar] [CrossRef] [PubMed]
  128. Marey, A.; Arjmand, P.; Alerab, A.D.S.; Eslami, M.J.; Saad, A.M.; Sanchez, N.; Umair, M. Explainability, Transparency and Black Box Challenges of AI in Radiology: Impact on Patient Care in Cardiovascular Radiology. Egypt. J. Radiol. Nucl. Med. 2024, 55, 183. [Google Scholar] [CrossRef]
  129. Tsai, M.-J.; Lin, P.-Y.; Lee, M.-E. Adversarial Attacks on Medical Image Classification. Cancers 2023, 15, 4228. [Google Scholar] [CrossRef] [PubMed]
  130. Amirian, M.; Schwenker, F.; Stadelmann, T. Trace and Detect Adversarial Attacks on CNNs Using Feature Response Maps. In Artificial Neural Networks in Pattern Recognition; Pancioni, L., Schwenker, F., Trentin, E., Eds.; Lecture Notes in Computer Science; Springer International Publishing: Cham, Switzerland, 2018; Volume 11081, pp. 346–358. ISBN 978-3-319-99977-7. [Google Scholar]
  131. Shao, R.; Shi, Z.; Yi, J.; Chen, P.-Y.; Hsieh, C.-J. On the Adversarial Robustness of Vision Transformers. arXiv 2021, arXiv:2103.15670. [Google Scholar]
  132. Mo, Y.; Wu, D.; Wang, Y.; Guo, Y.; Wang, Y. When Adversarial Training Meets Vision Transformers: Recipes from Training to Architecture. In Proceedings of the 36th International Conference on Neural Information Processing Systems, New Orleans, LA, USA, 28 November–9 December 2022; Curran Associates Inc.: Red Hook, NY, USA, 2024. [Google Scholar]
  133. Health AI Register 2024. Available online: https://www.HealthAIregister.com (accessed on 28 August 2024).
  134. AI Central. Available online: https://aicentral.acrdsi.org/ (accessed on 3 September 2024).
  135. Grenier, P.A.; Brun, A.L.; Mellot, F. The Potential Role of Artificial Intelligence in Lung Cancer Screening Using Low-Dose Computed Tomography. Diagnostics 2022, 12, 2435. [Google Scholar] [CrossRef]
  136. Lauritzen, A.D.; Lillholm, M.; Lynge, E.; Nielsen, M.; Karssemeijer, N.; Vejborg, I. Early Indicators of the Impact of Using AI in Mammography Screening for Breast Cancer. Radiology 2024, 311, e232479. [Google Scholar] [CrossRef]
  137. Roca, P.; Attye, A.; Colas, L.; Tucholka, A.; Rubini, P.; Cackowski, S.; Ding, J.; Budzik, J.-F.; Renard, F.; Doyle, S.; et al. Artificial Intelligence to Predict Clinical Disability in Patients with Multiple Sclerosis Using FLAIR MRI. Diagn. Interv. Imaging 2020, 101, 795–802. [Google Scholar] [CrossRef]
  138. Van Leeuwen, K.G.; Becks, M.J.; Grob, D.; De Lange, F.; Rutten, J.H.E.; Schalekamp, S.; Rutten, M.J.C.M.; Van Ginneken, B.; De Rooij, M.; Meijer, F.J.A. AI-Support for the Detection of Intracranial Large Vessel Occlusions: One-Year Prospective Evaluation. Heliyon 2023, 9, e19065. [Google Scholar] [CrossRef] [PubMed]
  139. Adams, S.J.; Madtes, D.K.; Burbridge, B.; Johnston, J.; Goldberg, I.G.; Siegel, E.L.; Babyn, P.; Nair, V.S.; Calhoun, M.E. Clinical Impact and Generalizability of a Computer-Assisted Diagnostic Tool to Risk-Stratify Lung Nodules With CT. J. Am. Coll. Radiol. 2023, 20, 232–242. [Google Scholar] [CrossRef] [PubMed]
  140. Asif, A.; Charters, P.F.P.; Thompson, C.A.S.; Komber, H.M.E.I.; Hudson, B.J.; Rodrigues, J.C.L. Artificial Intelligence Can Detect Left Ventricular Dilatation on Contrast-Enhanced Thoracic Computer Tomography Relative to Cardiac Magnetic Resonance Imaging. Br. J. Radiol. 2022, 95, 20210852. [Google Scholar] [CrossRef] [PubMed]
  141. Arasu, V.A.; Habel, L.A.; Achacoso, N.S.; Buist, D.S.M.; Cord, J.B.; Esserman, L.J.; Hylton, N.M.; Glymour, M.M.; Kornak, J.; Kushi, L.H.; et al. Comparison of Mammography AI Algorithms with a Clinical Risk Model for 5-Year Breast Cancer Risk Prediction: An Observational Study. Radiology 2023, 307, e222733. [Google Scholar] [CrossRef] [PubMed]
  142. Gräfe, D.; Beeskow, A.B.; Pfäffle, R.; Rosolowski, M.; Chung, T.S.; DiFranco, M.D. Automated Bone Age Assessment in a German Pediatric Cohort: Agreement between an Artificial Intelligence Software and the Manual Greulich and Pyle Method. Eur. Radiol. 2023, 34, 4407–4413. [Google Scholar] [CrossRef]
  143. Eom, H.J.; Cha, J.H.; Choi, W.J.; Cho, S.M.; Jin, K.; Kim, H.H. Mammographic Density Assessment: Comparison of Radiologists, Automated Volumetric Measurement, and Artificial Intelligence-Based Computer-Assisted Diagnosis. Acta Radiol. 2024, 65, 708–715. [Google Scholar] [CrossRef] [PubMed]
  144. Habuza, T.; Navaz, A.N.; Hashim, F.; Alnajjar, F.; Zaki, N.; Serhani, M.A.; Statsenko, Y. AI Applications in Robotics, Diagnostic Image Analysis and Precision Medicine: Current Limitations, Future Trends, Guidelines on CAD Systems for Medicine. Inform. Med. Unlocked 2021, 24, 100596. [Google Scholar] [CrossRef]
  145. Tadavarthi, Y.; Vey, B.; Krupinski, E.; Prater, A.; Gichoya, J.; Safdar, N.; Trivedi, H. The State of Radiology AI: Considerations for Purchase Decisions and Current Market Offerings. Radiol. Artif. Intell. 2020, 2, e200004. [Google Scholar] [CrossRef]
  146. Wenderott, K.; Krups, J.; Luetkens, J.A.; Gambashidze, N.; Weigl, M. Prospective Effects of an Artificial Intelligence-Based Computer-Aided Detection System for Prostate Imaging on Routine Workflow and Radiologists’ Outcomes. Eur. J. Radiol. 2024, 170, 111252. [Google Scholar] [CrossRef]
  147. Jacobs, C.; Schreuder, A.; Van Riel, S.J.; Scholten, E.T.; Wittenberg, R.; Wille, M.M.W.; De Hoop, B.; Sprengers, R.; Mets, O.M.; Geurts, B.; et al. Assisted versus Manual Interpretation of Low-Dose CT Scans for Lung Cancer Screening: Impact on Lung-RADS Agreement. Radiol. Imaging Cancer 2021, 3, e200160. [Google Scholar] [CrossRef] [PubMed]
  148. Jimenez-Pastor, A.; Lopez-Gonzalez, R.; Fos-Guarinos, B.; Garcia-Castro, F.; Wittenberg, M.; Torregrosa-Andrés, A.; Marti-Bonmati, L.; Garcia-Fontes, M.; Duarte, P.; Gambini, J.P.; et al. Automated Prostate Multi-Regional Segmentation in Magnetic Resonance Using Fully Convolutional Neural Networks. Eur. Radiol. 2023, 33, 5087–5096. [Google Scholar] [CrossRef]
  149. Nampewo, I.; Ariana, P.; Vijayan, S. Benefits of Artificial Intelligence versus Human-Reader in Chest X-Ray Screening for Tuberculosis in the Philippines. Int. J. Health Sci. Res. 2024, 14, 277–287. [Google Scholar] [CrossRef]
  150. Pemberton, H.G.; Goodkin, O.; Prados, F.; Das, R.K.; Vos, S.B.; Moggridge, J.; Coath, W.; Gordon, E.; Barrett, R.; Schmitt, A.; et al. Automated Quantitative MRI Volumetry Reports Support Diagnostic Interpretation in Dementia: A Multi-Rater, Clinical Accuracy Study. Eur. Radiol. 2021, 31, 5312–5323. [Google Scholar] [CrossRef]
  151. Van Leeuwen, K.G.; Schalekamp, S.; Rutten, M.J.C.M.; Huisman, M.; Schaefer-Prokop, C.M.; De Rooij, M.; Van Ginneken, B.; Maresch, B.; Geurts, B.H.J.; Van Dijke, C.F.; et al. Comparison of Commercial AI Software Performance for Radiograph Lung Nodule Detection and Bone Age Prediction. Radiology 2024, 310, e230981. [Google Scholar] [CrossRef] [PubMed]
  152. Cooper, R.G. The AI Transformation of Product Innovation. Ind. Mark. Manag. 2024, 119, 62–74. [Google Scholar] [CrossRef]
  153. Gao, S.; Xu, Z.; Kang, W.; Lv, X.; Chu, N.; Xu, S.; Hou, D. Artificial Intelligence-Driven Computer Aided Diagnosis System Provides Similar Diagnosis Value Compared with Doctors’ Evaluation in Lung Cancer Screening. BMC Med. Imaging 2024, 24, 141. [Google Scholar] [CrossRef] [PubMed]
  154. Savage, C.H.; Elkassem, A.A.; Hamki, O.; Sturdivant, A.; Benson, D.; Grumley, S.; Tzabari, J.; Junck, K.; Li, Y.; Li, M.; et al. Prospective Evaluation of Artificial Intelligence Triage of Incidental Pulmonary Emboli on Contrast-Enhanced CT Examinations of the Chest or Abdomen. Am. J. Roentgenol. 2024, 223, e2431067. [Google Scholar] [CrossRef]
  155. Rothenberg, S.A.; Savage, C.H.; Abou Elkassem, A.; Singh, S.; Abozeed, M.; Hamki, O.; Junck, K.; Tridandapani, S.; Li, M.; Li, Y.; et al. Prospective Evaluation of AI Triage of Pulmonary Emboli on CT Pulmonary Angiograms. Radiology 2023, 309, e230702. [Google Scholar] [CrossRef]
  156. Chien, H.-W.C.; Yang, T.-L.; Juang, W.-C.; Chen, Y.-Y.A.; Li, Y.-C.J.; Chen, C.-Y. Pilot Report for Intracranial Hemorrhage Detection with Deep Learning Implanted Head Computed Tomography Images at Emergency Department. J. Med. Syst. 2022, 46, 49. [Google Scholar] [CrossRef] [PubMed]
  157. Schaffter, T.; Buist, D.S.M.; Lee, C.I.; Nikulin, Y.; Ribli, D.; Guan, Y.; Lotter, W.; Jie, Z.; Du, H.; Wang, S.; et al. Evaluation of Combined Artificial Intelligence and Radiologist Assessment to Interpret Screening Mammograms. JAMA Netw. Open 2020, 3, e200265. [Google Scholar] [CrossRef] [PubMed]
  158. Lauri, C.; Shimpo, F.; Sokołowski, M.M. Artificial Intelligence and Robotics on the Frontlines of the Pandemic Response: The Regulatory Models for Technology Adoption and the Development of Resilient Organisations in Smart Cities. J. Ambient. Intell. Human. Comput. 2023, 14, 14753–14764. [Google Scholar] [CrossRef] [PubMed]
  159. Justo-Hanani, R. The Politics of Artificial Intelligence Regulation and Governance Reform in the European Union. Policy Sci. 2022, 55, 137–159. [Google Scholar] [CrossRef]
  160. Smuha, N.A. From a ‘Race to AI’ to a ‘Race to AI Regulation’: Regulatory Competition for Artificial Intelligence. Law Innov. Technol. 2021, 13, 57–84. [Google Scholar] [CrossRef]
  161. Tanguay, W.; Acar, P.; Fine, B.; Abdolell, M.; Gong, B.; Cadrin-Chênevert, A.; Chartrand-Lefebvre, C.; Chalaoui, J.; Gorgos, A.; Chin, A.S.-L.; et al. Assessment of Radiology Artificial Intelligence Software: A Validation and Evaluation Framework. Can. Assoc. Radiol. J. 2023, 74, 326–333. [Google Scholar] [CrossRef] [PubMed]
  162. Wimpfheimer, O.; Kimmel, Y. Artificial Intelligence in Medical Imaging: An Overview of a Decade of Experience. Isr. Med. Assoc. J. 2024, 26, 122–125. [Google Scholar]
  163. Mehta, S.S. Commercializing Successful Biomedical Technologies: Basic Principles for the Development of Drugs, Diagnostics and Devices; Cambridge University Press: Cambridge, NY, USA, 2008; ISBN 978-0-521-87098-6. [Google Scholar]
  164. Wu, K.; Wu, E.; Theodorou, B.; Liang, W.; Mack, C.; Glass, L.; Sun, J.; Zou, J. Characterizing the Clinical Adoption of Medical AI through U.S. Insurance Claims. NEJM AI 2023, 1, AIoa2300030. [Google Scholar] [CrossRef]
  165. Fornell, D. FDA Has Now Cleared 700 AI Healthcare Algorithms, More Than 76% in Radiology. Health Imaging. 2023. Available online: https://healthimaging.com/topics/artificial-intelligence/fda-has-now-cleared-700-ai-healthcare-algorithms-more-76-radiology (accessed on 1 December 2024).
  166. MarketsandMarkets Artificial Intelligence (AI) in Healthcare Market by Offering (Hardware, Software, Services), Technology (Machine Learning, NLP, Context-Aware Computing, Computer Vision), Application, End User and Region—Global Forecast to 2028. 2023. Available online: https://www.marketsandmarkets.com/Market-Reports/ai-toolkit-market-252755052.html (accessed on 30 November 2024).
  167. Varghese, J. Artificial Intelligence in Medicine: Chances and Challenges for Wide Clinical Adoption. Visc. Med. 2020, 36, 443–449. [Google Scholar] [CrossRef]
  168. Chau, M. Ethical, Legal, and Regulatory Landscape of Artificial Intelligence in Australian Healthcare and Ethical Integration in Radiography: A Narrative Review. J. Med. Imaging Radiat. Sci. 2024, 55, 101733. [Google Scholar] [CrossRef] [PubMed]
  169. Zhu, S.; Gilbert, M.; Chetty, I.; Siddiqui, F. The 2021 Landscape of FDA-Approved Artificial Intelligence/Machine Learning-Enabled Medical Devices: An Analysis of the Characteristics and Intended Use. Int. J. Med. Inform. 2022, 165, 104828. [Google Scholar] [CrossRef] [PubMed]
  170. Ibragimov, B.; Arzamasov, K.; Maksudov, B.; Kiselev, S.; Mongolin, A.; Mustafaev, T.; Ibragimova, D.; Evteeva, K.; Andreychenko, A.; Morozov, S. A 178-Clinical-Center Experiment of Integrating AI Solutions for Lung Pathology Diagnosis. Sci. Rep. 2023, 13, 1135. [Google Scholar] [CrossRef]
  171. Robert, D.; Sathyamurthy, S.; Singh, A.K.; Matta, S.A.; Tadepalli, M.; Tanamala, S.; Bosemani, V.; Mammarappallil, J.; Kundnani, B. Effect of Artificial Intelligence as a Second Reader on the Lung Nodule Detection and Localization Accuracy of Radiologists and Non-Radiology Physicians in Chest Radiographs: A Multicenter Reader Study. Acad. Radiol. 2024, in press. [Google Scholar] [CrossRef]
  172. Saha, A.; Bosma, J.S.; Twilt, J.J.; van Ginneken, B.; Bjartell, A.; Padhani, A.R.; Bonekamp, D.; Villeirs, G.; Salomon, G.; Giannarini, G.; et al. Artificial Intelligence and Radiologists in Prostate Cancer Detection on MRI (PI-CAI): An International, Paired, Non-Inferiority, Confirmatory Study. Lancet Oncol. 2024, 25, 879–887. [Google Scholar] [CrossRef]
  173. Wang, T.-W.; Hsu, M.-S.; Lee, W.-K.; Pan, H.-C.; Yang, H.-C.; Lee, C.-C.; Wu, Y.-T. Brain Metastasis Tumor Segmentation and Detection Using Deep Learning Algorithms: A Systematic Review and Meta-Analysis. Radiother. Oncol. 2024, 190, 110007. [Google Scholar] [CrossRef]
  174. Lu, S.-L.; Xiao, F.-R.; Cheng, J.C.-H.; Yang, W.-C.; Cheng, Y.-H.; Chang, Y.-C.; Lin, J.-Y.; Liang, C.-H.; Lu, J.-T.; Chen, Y.-F.; et al. Randomized Multi-Reader Evaluation of Automated Detection and Segmentation of Brain Tumors in Stereotactic Radiosurgery with Deep Neural Networks. Neuro Oncol. 2021, 23, 1560–1568. [Google Scholar] [CrossRef] [PubMed]
  175. Salehi, M.A.; Mohammadi, S.; Harandi, H.; Zakavi, S.S.; Jahanshahi, A.; Shahrabi Farahani, M.; Wu, J.S. Diagnostic Performance of Artificial Intelligence in Detection of Primary Malignant Bone Tumors: A Meta-Analysis. J. Imaging Inform. Med. 2024, 37, 766–777. [Google Scholar] [CrossRef]
  176. Bachmann, R.; Gunes, G.; Hangaard, S.; Nexmann, A.; Lisouski, P.; Boesen, M.; Lundemann, M.; Baginski, S.G. Improving Traumatic Fracture Detection on Radiographs with Artificial Intelligence Support: A Multi-Reader Study. BJR|Open 2023, 6, tzae011. [Google Scholar] [CrossRef]
  177. Jalal, S.; Parker, W.; Ferguson, D.; Nicolaou, S. Exploring the Role of Artificial Intelligence in an Emergency and Trauma Radiology Department. Can. Assoc. Radiol. J. 2021, 72, 167–174. [Google Scholar] [CrossRef]
  178. Ketola, J.H.J.; Inkinen, S.I.; Mäkelä, T.; Syväranta, S.; Peltonen, J.; Kaasalainen, T.; Kortesniemi, M. Testing Process for Artificial Intelligence Applications in Radiology Practice. Phys. Med. 2024, 128, 104842. [Google Scholar] [CrossRef]
Figure 1. Deep learning models applicable for radiological data processing.
Figure 2. Detailed comparison of human radiologists and AI models, emphasizing the unique strengths each brings to medical imaging tasks.
Figure 3. Necessary steps in medical imaging data preparation for processing with AI.
Figure 4. CT image of a brain cross-section with a marked stroke area (A); AngularSecondMoment feature map calculated for this image, in which a 15 × 15-pixel window was used (B).
Figure 5. T1W MR image of the foot bone (A); feature maps calculated for this image: SumAverage (B); SumOfSquares (C); and Sigma (D).
Figure 6. A bar graph illustrating the growth in the number of working AI algorithms and certified products between 2021 and October 2024. The annotations highlight the counts for both years and the percentage increases, emphasizing the rapid expansion of AI solutions in healthcare.
Figure 7. A bar chart illustrating the distribution of AI-based products across various medical subspecialties. The x-axis lists the different subspecialties, while the y-axis indicates the number of AI products available in each area.
Figure 8. A bar chart illustrating the distribution of AI-based products across various imaging modalities. The x-axis represents different imaging modalities—CT, MR, X-ray, mammography, ultrasound, PET, and SPECT—while the y-axis shows the number of AI products available for each modality.
Figure 9. A bar chart illustrating the distribution of AI products with regard to their top 16 targeted diseases. The x-axis lists the targeted diseases, while the y-axis represents the number of AI products developed for each condition.
Figure 10. A bar chart displaying the distribution of CE certifications across various AI products. The x-axis represents different certification categories, while the y-axis shows the number of products in each category.
Figure 11. A bar chart illustrating the distribution of AI products by the market entry year. The x-axis represents the market entry years, ranging from “Before 2014” to 2024, while the y-axis shows the number of products that entered the market in each year.
Figure 12. A bar chart illustrating the annual number of AI products cleared for use in radiology between 2008 and 2024. The trend reached its peak in 2023, with over 80 products cleared that year, reflecting the rapid growth and adoption of AI technologies in healthcare during this period. The year 2024 shows a slight decline, potentially signaling market stabilization or shifts in regulatory processes. The progression depicted here highlights the increasing integration of AI solutions into clinical practice, particularly in medical imaging.
Table 1. Detailed comparison of human radiologists vs. AI models [78,79,80,81,82,83,84,85,86,87].

| Factor                       | Radiologists | AI Models |
|------------------------------|--------------|-----------|
| Data Processing Volume       | Moderate     | High      |
| Connections (Trillions)      | 80           | 3         |
| Adaptability                 | High         | Low       |
| Perception of Patterns       | High         | Moderate  |
| Consistency                  | Moderate     | High      |
| Speed of Analysis            | Moderate     | High      |
| Fatigue Resistance           | No           | Yes       |
| Bias Resistance              | No           | Yes       |
| Training Techniques Required | No           | Yes       |