Review

A Review of Artificial Intelligence in Breast Imaging

by Dhurgham Al-Karawi 1,*, Shakir Al-Zaidi 1, Khaled Ahmad Helael 2, Naser Obeidat 3, Abdulmajeed Mounzer Mouhsen 3, Tarek Ajam 3, Bashar A. Alshalabi 3, Mohamed Salman 3 and Mohammed H. Ahmed 4
1 Medical Analytica Ltd., 26a Castle Park Industrial Park, Flint CH6 5XA, UK
2 Royal Medical Services, King Hussein Medical Hospital, King Abdullah II Ben Al-Hussein Street, Amman 11855, Jordan
3 Department of Diagnostic Radiology and Nuclear Medicine, Faculty of Medicine, Jordan University of Science and Technology, Irbid 22110, Jordan
4 School of Computing, Coventry University, 3 Gulson Road, Coventry CV1 5FB, UK
* Author to whom correspondence should be addressed.
Tomography 2024, 10(5), 705-726; https://doi.org/10.3390/tomography10050055
Submission received: 5 March 2024 / Revised: 14 April 2024 / Accepted: 6 May 2024 / Published: 9 May 2024

Abstract: With the increasing dominance of artificial intelligence (AI) techniques, the important prospects for their application have extended to various medical fields, including domains such as in vitro diagnosis, intelligent rehabilitation, medical imaging, and prognosis. Breast cancer is a common malignancy that critically affects women’s physical and mental health. Early breast cancer screening—through mammography, ultrasound, or magnetic resonance imaging (MRI)—can substantially improve the prognosis for breast cancer patients. AI applications have shown excellent performance in various image recognition tasks, and their use in breast cancer screening has been explored in numerous studies. This paper introduces relevant AI techniques and their applications in the field of medical imaging of the breast (mammography and ultrasound), specifically in terms of identifying, segmenting, and classifying lesions; assessing breast cancer risk; and improving image quality. Focusing on medical imaging for breast cancer, this paper also reviews related challenges and prospects for AI.

1. Introduction

Breast cancer is one of the most common cancers identified in women across the globe, and it is now the leading cause of cancer-related death among women [1,2,3]. In addition to being the most commonly diagnosed cancer in 154 countries (out of 185 countries), breast cancer is the main cause of cancer-related death in more than 100 countries [4]. According to the American Cancer Society, approximately 600 men and 40,000 women in the United States die of breast cancer annually [2]. Therefore, early screening and treatment of breast cancer is a global health concern. Accurate diagnosis of breast cancer, especially early detection and treatment, can critically affect its prognosis. The clinical cure rate of early-stage breast cancer exceeds 90%, whereas the cure rate in the middle stage ranges from 50% to 70%, and treatment is typically not effective in the late stage. Mammography, ultrasound, and MRI are crucial screening and supplementary diagnostic tools that serve as significant methods for the detection and staging of breast cancer, evaluation of treatment efficacy, and follow-up examination [5]. Breast tissue can be classified as normal or as containing a benign lesion, in situ carcinoma, or invasive carcinoma [3]. A benign tumor causes only a slight alteration to the breast anatomy and does not invade surrounding tissue, so it is not considered dangerous [6]. In situ carcinomas, by contrast, are confined to the mammary ducts or lobules and do not spread to other tissues [7]. As long as it is detected at an early stage, this type of cancer is not very dangerous and can be treated. Invasive carcinoma is the most severe form of breast cancer, as it can spread to other organs in the body [8].
AI refers to the capability of a computer system to accurately interpret and learn from external data and to flexibly apply the acquired knowledge to execute specific tasks. Remarkable advancements in computing power, together with the rise of big data over the past five decades, have propelled the application of AI into new domains [9]; for example, AI now powers face and voice recognition, among other technologies. The use of AI methods in medical imaging has gained increasing research interest. Significant progress has been made in applying AI algorithms, particularly deep learning (DL) algorithms, to image recognition tasks. The availability of various methods in the field of medical image analysis, ranging from convolutional neural networks (CNNs) to variational autoencoders, has contributed to rapid developments in medical imaging [10]. Furthermore, various types of cancers, such as ovarian cancer, can be diagnosed using AI and machine learning (ML) tools [11,12,13,14].
Radiologists read and analyze breast images and use them for diagnosis. However, their substantial workload and long hours can cause fatigue, leading to misjudgments, misdiagnoses, or missed diagnoses. Using AI, or, in this case, computer-aided diagnosis (CAD), for this purpose can alleviate potential human errors. In a CAD system, a suitable algorithm completes the processing and analysis of images [15,16]. Recent breakthroughs with DL in AI, especially CNNs, have significantly advanced the field of medical imaging [17,18]. More broadly, AI denotes the capability of applications or machines to emulate human learning and problem-solving [19]. The concept of AI was introduced in 1956 by John McCarthy. AI technologies have made remarkable progress since then, especially over the past decade. As AI has become a key part of computer science, there have been continuous efforts to create new kinds of intelligent machines that can imitate the human brain, extending to diverse applications such as image recognition, data mining, expert systems, natural language processing, language recognition, pattern recognition, and robotics [20]. In the medical field, AI can be used for early disease prediction, disease screening, clinical decision support, health management, hospital management, medical imaging, and medical record or literature analysis. Apart from assisting doctors in making accurate diagnoses, AI can be used to analyze medical images and information for disease screening and prediction. Focusing on the application of AI in breast imaging, Al-antari et al. reported high accuracy for a complete integrated CAD system (>92%) in the detection, segmentation, and classification of masses seen on mammograms [21]. Based on 2654 exams and readings by 101 radiologists, Rodriguez-Ruiz et al. reported a significant (17%) reduction in radiologists’ workload when using a trained AI system with automatic preselection (with an AI score of 2 as the threshold when scoring the possibility of cancer from 1 to 10) [22].
One of the most notable applications of AI is ML, which comprises unsupervised and supervised techniques. Unsupervised ML categorizes radiomic features without requiring labeled outputs, instead discovering structure in a set of historical imaging data similar to the data under study, as shown in Figure 1. Supervised ML, in contrast, involves training on available data labeled with the corresponding correct outputs.
The selected method must strike a balance between its ability to fit the training set and its ability to generalize to new data; until then, all parameters in the algorithm are subject to tuning. For instance, sparsity-enhancing regularization networks can concurrently make predictions and recognize the extracted features that most strongly affect those predictions [22]. ML includes computational algorithms, such as artificial neural networks (ANNs), decision trees, k-means, linear regression, principal component analysis (PCA), random forests, and support vector machines (SVMs), which utilize the image features extracted by radiomics as input to produce output predictions concerning disease outcomes. DL approaches, which are built on neural networks, develop models loosely inspired by the human brain and represent the most recent technology used for image classification. A simulation model known as the perceptron was the first model used to simulate neural networks and the human brain. A neural network has a variable number of layers. The input layer processes the multi-dimensional data used as input. The hidden portion consists of several layers: in a convolutional layer, a feature map is created and passed through a non-linear activation function before it reaches the pooling layer for downsampling, and the output can then be transferred to a fully connected layer for classification. Finally, the output layer yields the results of the analysis. For more complex problem solving, layers of perceptrons with all nodes fully connected can be created and arranged to form a multi-layer perceptron [23]. In addition, CNNs can be either supervised or unsupervised learning models. Supervised learning is a model-training procedure that requires the observed training data and the associated ground truth labels for the data (also described as “objects”). In unsupervised learning, there are no diagnoses or normal/abnormal labels for the training data. Supervised learning is typically considered for image classification tasks [24]. This paper discusses AI (DL and ML) and its applications in medical imaging of the breast (mammography and ultrasound), as well as the challenges and prospects related to applying AI techniques to medical imaging.
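As a concrete illustration of the layer sequence just described (convolution, non-linear activation, pooling, fully connected classification, output), the following minimal PyTorch sketch assembles a toy classifier; the channel counts, input size, and two-class head are illustrative assumptions rather than a model from the reviewed studies.

```python
import torch
import torch.nn as nn

class MinimalCNN(nn.Module):
    """Toy classifier following the layer order described above:
    convolution -> non-linear activation -> pooling -> fully connected -> output."""
    def __init__(self, num_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1),  # feature map from a grayscale image
            nn.ReLU(),                                   # non-linear activation
            nn.MaxPool2d(2),                             # pooling layer (downsampling)
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 56 * 56, num_classes)  # fully connected layer

    def forward(self, x):                    # x: (N, 1, 224, 224)
        x = self.features(x)
        x = torch.flatten(x, 1)
        return self.classifier(x)            # output layer yields class scores

logits = MinimalCNN()(torch.randn(4, 1, 224, 224))  # e.g., benign vs. malignant scores
```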

2. Public Breast Cancer Datasets

This section presents some of the most common public datasets of mammogram and ultrasound images. Researchers frequently employ machine learning techniques to analyze and derive insights from these datasets, which collectively support the advancement of machine learning in mammography and ultrasound and foster innovation in breast cancer detection and diagnosis.

2.1. Public Mammography Datasets

In the realm of mammography research, several publicly available datasets serve as crucial resources for advancing machine learning techniques. These datasets offer annotated information and diverse images, enabling researchers to develop and evaluate algorithms effectively. Here, we highlight some of the prominent public mammography datasets:
MIAS (Mammographic Image Analysis Society) [25]: This dataset comprises over 300 screening mammograms annotated with information about background tissue type, abnormalities present in the breast, and their severity. Additionally, lesions in mammograms are marked with X and Y coordinates, along with labels for various abnormalities.
DDSM (Digital Database of Screening Mammography) [26]: With over 2600 scanned film mammography studies, the DDSM provides a comprehensive archive. A subset of the DDSM, known as the CBIS-DDSM, offers well-annotated images with detailed pathological information such as breast mass type, tumor grade, and stage.
INBreast Database [27]: Despite the recent discontinuation of support by the Universidade do Porto, INBreast remains a valuable resource, with over 410 screening mammograms. It includes information on abnormality types and mass contour data.
BCDR (Breast Cancer Digital Repository) [28]: This dataset focuses on full-field digital mammograms (FFDMs) and encourages contributions from researchers [29]. It provides images in both craniocaudal and mediolateral oblique views, accessible after registration.
BancoWeb LAPIMO [30]: Comprising over 1400 mammograms, this dataset offers images in TIFF format collected from 320 subjects. It includes a variety of lesions classified as benign, malignant, or healthy.
VICTRE Trial dataset [31]: Unlike the others, this dataset is entirely synthetic, simulating 2986 subjects with representative breast sizes and densities. It uses in silico versions of digital breast tomosynthesis (DBT) and provides open-source software tools for analysis and lesion insertion [32].
OPTIMAM dataset [31]: Available upon request from the University of Surrey, OPTIMAM offers relational data storage. It contains annotated 3D DBT imaging and provides an open-source Python package for easy integration into research systems, facilitating seamless processing.

2.2. Public Ultrasound Datasets

These datasets stand as pivotal resources frequently utilized by researchers in machine learning. Each dataset offers a distinct collection of breast ultrasound images, catering to various research needs.
BUS [33]: Sourced from the UDIAT Diagnostic Centre of the Parc Tauli Corporation in Sabadell, Spain, BUS comprises 163 breast ultrasound images. Among these, 109 depict benign conditions, while 54 exhibit malignant characteristics.
BUSI [34]: Gathered from the Baheya Hospital for Early Detection and Treatment of Women’s Cancer in Cairo, Egypt, BUSI features a more extensive collection. This dataset encompasses ultrasound images obtained from 600 female patients aged between 25 and 75 years. It comprises 437 benign images, 210 malignant images, and 133 images depicting normal breast conditions, totaling 780 ultrasound images.
BUSIS [35]: Originating from the Second Affiliated Hospital of Harbin Medical University, the Affiliated Hospital of Qingdao University, and the Second Hospital of Hebei Medical University, BUSIS consists of 562 images depicting female subjects aged 26 to 78 years. Notably, these datasets may contain multiple images representing the same patient.
Regarding image labels, both BUS and BUSI provide lesion shape labels along with classifications distinguishing between benign and malignant conditions. In contrast, BUSIS solely offers lesion shape labels without further classification details. Figure 2 illustrates mammogram and ultrasound images from public datasets.
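As an illustration of how such a dataset is typically prepared for the ML techniques discussed in the following sections, the Python sketch below builds a stratified train/test split; the folder layout and file naming (class subfolders, with "_mask" files shipped alongside images as in BUSI) are assumptions about a local copy of the data.

```python
from pathlib import Path
from sklearn.model_selection import train_test_split

# Hypothetical local copy of the BUSI dataset, arranged in class subfolders.
root = Path("Dataset_BUSI_with_GT")
paths, labels = [], []
for cls in ("benign", "malignant", "normal"):
    for p in (root / cls).glob("*.png"):
        if "mask" not in p.name:          # skip the lesion masks distributed with BUSI
            paths.append(p)
            labels.append(cls)

# Stratified split preserves the benign/malignant/normal proportions in both subsets.
train_p, test_p, train_y, test_y = train_test_split(
    paths, labels, test_size=0.2, stratify=labels, random_state=0)
```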

3. Applying AI to Mammography

Breast cancer screening methods often involve the use of mammography [36]. This non-invasive screening method is easy to operate and yields high-resolution images with good repeatability. It is relatively pain-free and does not discriminate by age or body type. The retained images can also be compared (i.e., before vs. after). Breast masses that cannot be detected by touch, benign lesions, and malignant breast tumors can be accurately detected using mammograms. Mammography involves full-field digital mammography (DM) systems and yields for-processing (raw imaging data) and for-presentation (post-processed versions of raw data) image formats [37]. The uses of AI for detecting, classifying, and segmenting breast masses, assessing breast cancer risk, and improving image quality are discussed in the following subsections (see Table 1).

3.1. Detection and Classification of Breast Masses

One of the most common symptoms of breast cancer is the presence of masses. Detecting and diagnosing such abnormalities can be challenging, especially in dense breasts, due to their varied shapes, sizes, and margins. This is why breast mass detection is a pivotal step in CAD. Several studies have recommended crow search optimization based on an intuitionistic fuzzy clustering approach with neighborhood attraction (CrSA-IFCM-NA); its effectiveness in separating masses in mammogram images and its good outcomes on cluster validity indices, indicating precise region segmentation, have been demonstrated [38]. Other studies recommend using a completely integrated CAD system comprising a regional DL approach (you only look once, YOLO), a new deep network model of a full-resolution convolutional network (FrCN), and a deep CNN for the detection, segmentation, and classification of masses in mammograms, respectively. Using the INbreast dataset (2022), these studies reported a remarkable detection accuracy of 98.96%, suggesting the potential for this system to assist radiologists in effectively and accurately diagnosing these masses [39,40].
Ertosun et al. [41] presented a CNN-based visual search approach designed to localize masses within mammograms. Their model comprises two key sub-modules: an anomaly detector followed by a mass localizer. Initially, the anomaly detector discerns whether a mammogram contains masses, subsequently channeling mass-containing images into the localizer for precise localization. These modules, leveraging hierarchical CNN layers, were trained on a dataset exceeding 2500 images sourced from the DDSM. Al-Masni et al. [42] opted for the YOLO deep network introduced by Redmon et al. [43] for simultaneous mass detection and classification. YOLO uses end-to-end learning, applying successive convolutional layers to segment the image into sub-regions, then placing bounding boxes around significant objects and assigning class labels. Following five-fold cross-validation, the authors reported impressive sensitivity (100%) and specificity (94%) scores on the INBreast dataset.
Table 1. An overview of key AI studies in mammography.
Ref. | Method | Application | Dataset Size | Accuracy
[37] | DoG and HoG | Microcalcification cluster detection | 373 cases | -
[41] | CNN | Classification engine and a localization engine | 2420 cases | 85%
[42] | YOLO-based | Detection | 600 cases | 99.7%
[44] | Faster R-CNN | Detection | DDSM 2620 cases, SU-D 847 cases, INbreast 115 cases | -
[45] | Deep multi-instance networks | Classification | 410 cases | 90%
[46] | CBR | Classification | 2620 cases | 91.34%
[47] | CNN | Classification | INbreast 89 cases, MCA 49 cases | 90%
[48] | CNN features + MSVM | Classification | 416 cases | 90%
[49] | Deep fusion learning | Classification | 208 cases | 89.06%
[50] | Fuzzy contours | Segmentation | 57 cases | 88.08%
[51] | Mesh-free + SVM | Segmentation | 322 cases | 94.77%
[52] | Dense U-Net + AGs | Segmentation | D-A 186 cases, D-B 163 cases | 78.38%
[53] | Mask R-CNNs + GCNNs | Segmentation | MIAS 58 cases, DDSM 200 cases | 99.01%
[54] | CNN | Segmentation | 885 cases | 91%
[55] | Densely connected U-Net with attention gates (AGs) | Segmentation | 400 cases | 78.38%
[56] | Mask R-CNN with GCNN | Segmentation | MIAS 322 cases and INbreast 115 cases | 99.1%
[57] | CLAHE and CNN | Image enhancement | DDSM 6000 cases, ZMDS 1739 cases | 85.5%
[58] | FADHECAL and FCIS | Image enhancement | DDSM 2620 cases, MIAS 322 cases | -
[59] | LH and FEF | Image enhancement | 97 cases | -
Ribli et al. [44] deployed a Faster R-CNN approach, achieving a commendable AUC of 0.95 on the INBreast dataset. Notably, they contributed to reproducibility by open-sourcing their implementation on GitHub, a rare practice in the field. To enhance model robustness, the authors trained on DDSM data and evaluated on INBreast, demonstrating domain generalizability. Platania et al. [45] employed pretrained CNN weights to initialize a binary classifier’s weights in a semi-supervised manner. Their approach involves a two-module system, in which a YOLO-inspired CNN detects regions of interest (ROIs). Subsequently, the initial detector’s weights are transferred to an FFDM classifier, which is trained on the entire mammogram image. After testing on the DDSM, the authors reported an AUC score of approximately 92.3% and an accuracy of 93.5%.
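Ribli et al.'s open-sourced code is the authoritative reference; purely as a rough sketch of the Faster R-CNN transfer-learning recipe described here, the snippet below fine-tunes torchvision's COCO-pretrained detector for a single "mass" class. The image and annotated box are placeholders, not data from the study.

```python
import torch
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

# COCO-pretrained detector; replace the box head for two classes: background + mass.
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
in_features = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes=2)

# Placeholder training example: one image with one annotated mass bounding box.
images = [torch.rand(3, 512, 512)]
targets = [{"boxes": torch.tensor([[100.0, 100.0, 200.0, 200.0]]),
            "labels": torch.tensor([1])}]

model.train()
losses = model(images, targets)        # dict of classification/box-regression losses
total_loss = sum(losses.values())      # backpropagate this in a real training loop
```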
In another work [46], the authors proposed a novel framework for detecting breast cancer in mammogram images, classifying the images in an explainable manner. In particular, a classification approach based on case-based reasoning (CBR) was used. Because the quality of the extracted features strongly influences the accuracy of CBR, a pipeline was developed to improve feature quality and provide a final diagnosis. Mammogram images are segmented using a U-Net architecture, an efficient method for extracting regions of interest (RoIs). A mammogram segmented with DL provides accurate results, while a mammogram classified with CBR provides accurate and explainable results. This approach outperformed several well-known ML and DL methods on the curated breast imaging subset of the digital database for screening mammography (CBIS-DDSM), achieving accuracies of 86.71% and 91.34%, respectively. A shallow–deep CNN was proposed by Gao et al. [47] to classify masses as benign or cancerous based on mammography images. Low-energy images are recombined using shallow CNNs, and unique features are extracted from these images using deep CNNs. Using their proposed technique, they achieved an accuracy of 90%.
A hybrid technique for breast cancer diagnosis was proposed and tested in [48]. Three DL CNN models were applied as feature extractors in this study: Inception-V3, ResNet50, and AlexNet. The term variance (TV) feature selection algorithm was used to extract useful features from the CNN models. In statistics, variance measures the spread or dispersion of a set of values: it quantifies how far each data point is from the mean and provides insight into the variability of the data. After the TV-selected CNN features are combined, further selection is performed to determine which features are most useful, and those features are then passed to a multi-class support vector machine (MSVM). The Mammographic Image Analysis Society (MIAS) image database, specifically the mini-Mammographic Image Analysis Society (mini-MIAS) dataset, was used to test the effectiveness of the suggested method. Patches were assigned to the RoIs of the mammograms. After testing various TV feature subsets, the 600-feature subset with the best classification performance was identified. Compared with previous studies, this work achieved a higher classification accuracy (CA): 97.81% with 70% of the data used for training, 98% with 80%, and 99% with 90%. As a final step, an ablation analysis was performed to highlight the key parameters of the proposed network.
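A minimal sketch of the variance-ranked selection step followed by a multi-class SVM may clarify the pipeline; the feature matrix, its dimensions, the labels, and the RBF kernel are illustrative stand-ins, with only the 600-feature cut-off mirroring the study.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
feats = rng.normal(size=(416, 4096))   # stand-in for concatenated CNN features
y = rng.integers(0, 3, size=416)       # hypothetical three-class labels

# Rank features by variance and keep the top 600, mirroring the TV selection above.
top = np.argsort(feats.var(axis=0))[::-1][:600]

clf = SVC(kernel="rbf", decision_function_shape="ovr")  # one-vs-rest multi-class SVM
clf.fit(feats[:, top], y)
```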
Yu et al. [49] investigated the discriminative patterns between normal and tumor categories based on deep fusion learning. Their framework for mammographic image classification using deep fusion learning includes two steps. The proposed deep fusion models are first trained on RoI patches randomly chosen from all RoIs in the original dataset. The authors developed a deep fusion model (Model 1) to classify the RoI patches as normal or tumor tissue. Another model (Model 2) integrates cross-channel deep features using one-to-one convolution to explore associations between channels of the same block. The final prediction for each RoI is obtained by a majority vote over the predictions of its constituent patches. Model 1 achieved a recall rate of 0.913, a precision rate of 0.8077, and an overall accuracy of 0.8906, while Model 2 achieved an overall accuracy of 0.875, a recall rate of 0.9565, and a precision rate of 0.7586 for the tumor class data.

3.2. Segmentation of Breast Masses

The effectiveness of treatment depends directly on the accuracy of mass segmentation. The use of fuzzy contours has been recommended for automatic segmentation of breast masses; one study recorded high average true positive (91.12%) and accuracy (88.08%) rates for RoIs extracted from the mini-MIAS dataset [50]. The low contrast of mammogram images, irregular shapes of masses, spiculated margins, and varying pixel intensities contribute to the complexity of global segmentation of masses in mammograms. In another study, an evolved level set function for segmentation of the breast and suspicious mass regions was explored using a mesh-free radial basis function collocation approach, and suspicious mass regions were classified into abnormal and normal using an SVM classifier, achieving high sensitivity (97.12%) and specificity (92.43%) on the DDSM [51]. Accurate segmentation of breast lesions ensures accurate disease classification and diagnosis [52]. Such automatic image segmentation algorithms demonstrate the promising potential of DL in precision medical systems.
A new segmentation method for tumor mammograms is presented in [53], which extracts both the spiculated regions and the mass core. Pixels in spiculated regions tend to follow a general linear pattern, and the same holds for the mass core. The proposed method extracts these regions based on the differences between adjacent pixels. In the mass core and spiculated regions, redundant pixels can be deleted using three thresholds; segmented tumors are then formed by merging these regions. This method achieved a mean Dice coefficient of 0.9309 on the mini-MIAS dataset, and mean Jaccard coefficients of 0.9557 and 0.9132 on the DDSM. According to the results, when compared with other techniques, the proposed segmentation technique can accurately extract tumor segments.
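Since the Dice and Jaccard coefficients recur throughout these segmentation studies, a short reference implementation for binary masks may be useful:

```python
import numpy as np

def dice_jaccard(pred, gt):
    """Dice = 2|A∩B| / (|A| + |B|); Jaccard = |A∩B| / |A∪B| for binary masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    if union == 0:                  # both masks empty: define perfect agreement
        return 1.0, 1.0
    dice = 2.0 * inter / (pred.sum() + gt.sum())
    return dice, inter / union

d, j = dice_jaccard(np.ones((4, 4)), np.eye(4))   # toy masks: d = 0.4, j = 0.25
```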
Salama and Aly [54] collected images from the DDSM, the mini-MIAS dataset, and the CBIS-DDSM. They used a variety of models to segment and classify the images as benign or malignant, including DenseNet121, InceptionV3, VGG16, ResNet50, and MobileNetV2. When using InceptionV3 with data augmentation, the best accuracy was 88.87%.
Li et al. [55] proposed a fully automatic method combining densely connected U-Nets with attention gates (AGs) for breast mass segmentation on mammogram images. It consists of an encoder and a decoder: convolutional networks encode information, and U-Nets integrated with AGs decode it. The DDSM, an authorized public screening database, was used to test the proposed method. The F1-score, intersection over union (IoU), sensitivity, specificity, and overall accuracy were used to evaluate the effectiveness of the method. Compared with U-Net, attention U-Net, DenseNet, and state-of-the-art segmentation techniques, the dense U-Net integrated with AGs achieved better segmentation results, with an accuracy of 78.38% and an F1-score of 78.24%. Additionally, the network has a lower standard deviation, suggesting that it is more capable of generalization.
The results of the study reported in [56] demonstrated highly accurate breast cancer segmentation by combining mask regional CNNs (mask R-CNNs) with group CNNs (G-CNNs). These approaches maximize the share of weights and the expressive capacity of the model while maintaining rotational invariance. The INbreast and mini-MIAS datasets were used to test the model. Comparing the results with those of conventional architectures, the model achieved a 99.01% accuracy, a Dice coefficient of 86.63%, a Jaccard index of 87.76%, 99.24% sensitivity, and 98.55% specificity.

3.3. Image Quality Improvement

The accuracy of a diagnosis largely depends on the image quality. Good image quality means clear images, which significantly improve the diagnosis and accuracy rates of AI models when detecting and diagnosing microscopic lesions in mammograms. Various computer algorithms have been developed to enhance image quality. Higher image quality offers more information on the phase, directionality, and shift invariance of the data. In this regard, multi-scale shearlet transforms can produce multi-resolution results for the detection of cancer cells, particularly those with smaller contours. Shenbagavalli et al. reported that benign and malignant cases in the DDSM were classified with an accuracy of up to 93.45% using the shearlet transform image enhancement method, suggesting its effectiveness in enhancing mammogram image quality [15]. Teare et al. used a novel form of a false-color enhancement method to optimize the characteristics of mammograms through contrast-limited adaptive histogram equalization (CLAHE), and they utilized dual deep CNNs at different scales to classify the images and derivative patches, along with a random forest gating network. In this way, they achieved a sensitivity of 0.91 and a specificity of 0.80 [57]. Given the significance of image quality for accurate diagnosis, rigorous image quality evaluation and improvement are essential for subsequent analysis and diagnosis by ANN systems and radiologists.
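As a point of reference for the CLAHE step used by Teare et al., the OpenCV snippet below applies contrast-limited adaptive histogram equalization to a grayscale image; the synthetic input, clip limit, and tile grid are illustrative defaults rather than the authors' settings.

```python
import cv2
import numpy as np

# In practice this would be a grayscale mammogram; a synthetic array keeps the sketch runnable.
img = (np.random.rand(256, 256) * 255).astype(np.uint8)

clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))  # illustrative defaults
enhanced = clahe.apply(img)  # equalizes contrast locally within each tile of the grid
```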
To reduce the noise of mammogram images while preserving contrast and brightness, Suradi et al. [58] developed the fuzzy anisotropic diffusion histogram equalization contrast adaptive limited (FADHECAL) technique. In addition to the FADHECAL technique, a fuzzy clipped inference system (FCIS) is applied during the enhancement process, automatically selecting the clip limit from the available options. The mammogram images were obtained from the DDSM and mini-MIAS datasets. The outcomes show that the FADHECAL technique produced better results than the other selected enhancement methods, with AMBE = 6.502 ± 1.855, SSIM = 0.934 ± 0.034, MAE = 15.742 ± 1.217, PSNR = 26.843 ± 2.541, UIQI = 0.969 ± 0.021, and RMSE = 1.151 ± 0.147. The FADHECAL technique can be used to enhance mammogram images to detect breast cancer lesions more accurately with a reduced level of noise while preserving image detail.
Several approaches are presented in [59] to enhance mammographic contrast, including classical methods (linguistic hedges and fuzzy enhancement functions), advanced fuzzy sets (intuitionistic, Pythagorean, and Fermatean fuzzy sets), and genetic algorithm optimization. An advanced fuzzy set provides a more accurate assessment of the uncertainty of the membership function. For this reason, the intuitionistic method is the most efficient, but most of the other techniques are also effective, depending on the problem. Compared with conventional methods, linguistic methods can provide a more manageable way to spread the histogram, revealing more extreme values. A high-quality final image can be obtained using ordered weighted averaging (OWA) operators combined with enhanced mammography images.

3.4. Assessing Breast Cancer Risk

The high incidence and mortality rates of breast cancer adversely affect patients’ physical and mental health. There are numerous risk factors, such as age, family history, reproductive factors (e.g., early menarche, late menopause, first pregnancy at a late age, low parity), estrogen (e.g., endogenous or exogenous estrogen), and lifestyle (e.g., smoking, excessive alcohol consumption, dietary fat intake) [60]. Acknowledging and gaining a better understanding of breast cancer risks can promote early detection and prevention.
Numerous studies have extensively explored the use of AI in breast cancer risk prediction. For instance, in a systematic review of ML algorithms for breast cancer risk prediction reported from January 2000 to May 2018, Nindrea et al. found that the SVM algorithm had the highest accuracy compared to ANNs, decision trees, k-nearest neighbor, and naive Bayes algorithms [61]. Several other studies demonstrated that the combination of ANN and cytopathological diagnosis could be used to evaluate breast cancer risk by analyzing and learning mammography results, risk factors, and clinical findings, which can help doctors to make informed estimations of malignancy risk and improve the positive predictive value (PPV) of the decision to perform biopsy [62].
A number of studies [63,64,65,66,67] have employed large cross-sectional screening cohorts representing the general screening population to train DL models. These studies utilized normal mammographic images acquired at least one year before a breast cancer diagnosis or a negative follow-up (i.e., BI-RADS 1 and 2). The design of these studies more closely reflects the task of assessing breast cancer risk, as they were aimed at identifying women at high risk before they develop cancer. It is important to use cases and controls of the same age in such a study and to report evaluation measures that are age-adjusted; otherwise, risk prediction performance estimates can be inflated. The areas under the receiver operating characteristic curves (AUCs) for the models were in the range of 0.60 to 0.84, and they often outperformed state-of-the-art breast cancer risk models [59,60,63,65]. Ha et al. [67] showed that a full-field digital mammography (FFDM)-driven DL risk score was more predictive than Breast Imaging Reporting and Data System (BI-RADS) breast density (odds ratios 4.4 vs. 1.7). FFDM-based DL risk scores also outperformed automated breast density measurements according to Dembrower et al. [63]. Finally, Yala et al. found that a hybrid DL model involving both full-field mammograms and traditional risk factors recorded higher accuracy than the Tyrer–Cusick model, which represents the current clinical standard (AUCs of 0.68 versus 0.62, respectively) [65]. These studies provide preliminary evidence that FFDM-based DL models offer promise for more accurate prediction of breast cancer risk than density-based models and existing epidemiology-based models. Given the above, the accuracy of AI techniques in the prediction of breast cancer risk is evident, which can undoubtedly help practitioners provide appropriate interventions to minimize the risk of breast cancer. Table 1 presents an overview of key AI papers in mammography.
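One simple way to realize the age adjustment described above is to compute the AUC within each age band and average the results, so that age differences between cases and controls cannot inflate performance; the sketch below uses synthetic stand-in labels, scores, and age bands.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
age_band = rng.choice(["40-49", "50-59", "60-69"], size=1000)
y = rng.integers(0, 2, size=1000)          # 1 = later cancer diagnosis (synthetic)
score = rng.random(1000)                   # hypothetical DL risk score

# AUC within each age band, then averaged: age alone cannot drive the estimate.
aucs = [roc_auc_score(y[age_band == b], score[age_band == b])
        for b in np.unique(age_band)]
print(f"age-adjusted AUC: {np.mean(aucs):.3f}")
```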

4. Applications of AI in Ultrasound

Ultrasound is a diagnostic method with a high usage rate owing to its freedom from radiation, its ease of operation, and the instantaneous results it provides. Therefore, using ultrasound imaging to detect and diagnose breast cancer has become increasingly common. To address the need for quantification and standardization of ultrasound in order to avoid misdiagnoses (e.g., due to a lack of experience or subjective influence), the development of an AI system to detect and diagnose breast lesions in ultrasound images is proposed in [68]. Other studies [24,69,70] have demonstrated the use of AI systems to identify and segment RoIs, extract related features, and classify benign and malignant lesions in breast ultrasound images. Figure 2 shows ultrasound images of the breast.

4.1. Identification and Segmentation of RoIs

Lesions must be identified and segmented from the background to determine an accurate representation for the diagnosis of breast lesions. At present, sonographers are responsible for manually segmenting breast ultrasound images. This clinical process requires an experienced sonographer as well as considerable time and effort, and the images often have poor contrast, unclear boundaries, and excessive shadowing. Therefore, an automatic segmentation method for breast ultrasound images is recommended. The segmentation process primarily involves detecting RoIs containing lesions and delineating the contours of the lesions. Hu et al. trained a combination of a phase-based active contour (PBAC) model and a dilated fully convolutional network (DFCN) and achieved a high mean Dice similarity coefficient (DSC) of 88.97% in identifying and segmenting 170 breast ultrasound images, suggesting its effectiveness in guiding manual segmentation in medical analysis [71]. Kumar et al. demonstrated the performance of a multi-U-Net algorithm for segmenting masses in breast ultrasound images from 258 women, which surpassed the performance of the original U-Net algorithm, with a mean Dice coefficient of 0.82, a true positive rate of 0.84, and a false positive rate of 0.01 [72]. Feng et al. demonstrated that the performance of a Hausdorff-based fuzzy c-means (FCM) algorithm combined with an adaptive region selection scheme (involving the adaptive selection of the area around each pixel based on the mutual information between regions) for segmenting breast tumors in ultrasound images surpassed that of the Hausdorff-based and traditional FCM algorithms [73]. Using AI to automatically identify and segment breast lesions in ultrasound images can assist sonographers in accurately and efficiently detecting and diagnosing breast cancer.
Numerous researchers have delved into ultrasound-based breast cancer diagnosis, employing a range of methodologies. Early studies predominantly utilized traditional digital image processing techniques and machine learning approaches for detection. For instance, Drucker et al. [74] pioneered the use of radial gradient index filtering to identify initial points within regions, subsequently scrutinizing candidate areas against the background through the optimization of regional average radial gradient indices. Lesions were classified using Bayesian neural networks, yielding a sensitivity of 87%, with a false positive detection rate of 0.76.
Deep learning (DL), emerging as a prominent method in computer vision and pattern recognition, has garnered significant attention in medical research, including breast cancer detection [75]. Cao et al. [76] conducted a comprehensive comparison of five deep learning-based object detection networks, highlighting SSD’s superior performance in terms of precision and recall. In a study focusing on breast lesion detection, Yap et al. [77] employed Faster R-CNN as their deep learning network. To mitigate the impact of small sample datasets, they utilized transfer learning. Additionally, they introduced a three-channel fusion technique, merging original, sharpened, and contrast-enhanced images into a new three-channel image, enhancing detection accuracy.
Li Y et al. [78] developed BUSnet, a DL model, to analyze ultrasound images for the detection of breast tumors. First, a two-stage method was developed, which included a region proposal algorithm for unsupervised regions and a bounding box regression algorithm for supervised regions. A post-processing method was then proposed to further improve the detection accuracy. Using the proposed method, 487 benign samples and 210 malignant samples in a benchmark dataset were analyzed. The results showed that the proposed method proved to be effective and accurate.
A computerized analysis of breast ultrasound images for automatic breast tumor detection, classification, and volume estimation was developed in [79]. The Radiology Department at Thammasat University and the Queen Sirikit Center of Breast Cancer in Thailand provided the breast ultrasound images. Among the 655 images, 445 were benign and 210 were malignant. The training and testing datasets were augmented through blur, vertical flip, horizontal flip, and noise transformations. The YOLOv7 architecture, based on DL techniques, was then used for tumor detection, localization, and classification. A simple pixel-per-metric technique was used to estimate tumor volume. With a confidence score of 0.95, the model demonstrated excellent tumor detection performance, achieving 95.07% lesion classification accuracy, 94.97% sensitivity, 95.24% specificity, 97.42% PPV, and 90.91% NPV on the test sets.
Chorianopoulos et al. [80] applied three CNN models—MobileNet, VGG16, and AlexNet—to two datasets, one containing ultrasound images and the other containing histopathology images. On the ultrasound dataset, VGG16 achieved the highest accuracy of 96.82%. On the invasive ductal carcinoma dataset, MobileNet achieved the highest accuracy of 91.04%.
Byra et al. [81] proposed a deep-learning method for segmenting breast masses using ultrasound data. CNNs with selective kernels (SKs) were developed. The SKs adjusted the network’s receptive fields by combining dilated and conventional convolutions. Ultrasound images of 882 breast masses were used to create and evaluate the proposed method. Additionally, 893 ultrasound images obtained from three medical centers were tested. The SK-U-Net algorithm achieved a Dice score of 0.826 on 150 ultrasound images, outperforming the regular U-Net algorithm, which scored 0.778. A Dice score of 0.646 to 0.780 was achieved using the proposed method across three datasets.
An ultrasound image segmentation technique based on feature separation and complementation is presented in [82]. Top-to-bottom (T2B) and bottom-to-top (B2T) streams were used for feature separation, with each branch being much more effective at extracting the required feature information. Feature complementation was achieved by combining global semantic information with local detailed information at each stage. The result was a complementary boundary feature in the T2B stream, along with suppressed noise in the B2T stream. The authors evaluated these techniques using UDIAT, BUSIS, and LUSI, three publicly available datasets. Compared with other current methods for ultrasound image segmentation, their FSC-Net performed at least 1.59%, 0.96%, and 3.74% better than other state-of-the-art methods on the three datasets, respectively.

4.2. Feature Extraction

Suspicious masses are typically identified and segmented according to the morphological and textural features in breast images, such as edge, echo type, hardness, orientation, rear features, shape, and location of calcification. Classification of suspicious masses according to the BI-RADS scale can quantify the degree to which cancer is suspected. Accurate identification of morphological features by sonographers is crucial for distinguishing between benign and malignant masses. Using AI systems for feature extraction from breast ultrasound images can assist in the diagnosis process, reducing the substantial demand on sonographers to deliver accurate diagnoses. Using FCM clustering, Hsu et al. found that combining morphological parameters (e.g., the standard deviation of the shortest distance), textural features (e.g., variance), and the Nakagami parameter allowed for high accuracy (89.4%), specificity (86.3%), and sensitivity (92.5%) when extracting physical features of breast ultrasound images. When logistic regression and SVM classifiers were compared, the maximum discrimination performance of the optimal feature collection did not depend on the classifier type, suggesting that the functional complementarity of combining different feature parameters can enhance the performance of breast cancer classification [83].
Zhang et al. combined feature learning and feature selection to form a two-layer DL architecture, which recorded an area under the receiver operating characteristic curve of 0.947 and had higher accuracy (93.4%), sensitivity (88.6%), and specificity (97.1%) when extracting and classifying shear wave elastography (SWE) features compared with the statistical features of quantified image intensity and texture [84]. The use of a CAD system (e.g., the Samsung RS80A S-Detect ultrasound system) has been reported to significantly improve the diagnostic performance of radiologists, regardless of their experience in analyzing the ultrasound features of breast masses. Using a CAD system can help to refine the description of breast lesions and enhance consistency among observers regarding the characteristics of breast masses, ultimately resulting in better decision-making [85].
Jabeen et al. [86] proposed a new framework for the classification of breast cancer based on ultrasound images by combining DL and best-selected features. The proposed framework includes five major steps: (1) data augmentation to increase the size of the original dataset for learning CNNs; (2) modification of the output layer of the pre-trained DarkNet-53 model based on the augmented dataset classes; (3) training of the modified model by transfer learning and use of the global average pooling layer to extract features; (4) use of reformed differential evolution, reformed grey wolf (RGW), and improved optimization algorithms to select the best features; and (5) use of a new probability-based serial approach and ML algorithms to fuse together and classify the best-selected features. In the experiment, the best accuracy was 99.1% based on augmented breast ultrasound images (BUSI). The proposed framework performed better than other recent techniques.
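A rough sketch of the transfer-learning feature-extraction step (steps 2–3) follows; DarkNet-53 is not bundled with torchvision, so a pretrained ResNet50 stands in for it here, and the input batch is a random placeholder rather than augmented ultrasound images.

```python
import torch
import torchvision

# Pretrained backbone as a fixed feature extractor (ResNet50 standing in for DarkNet-53).
backbone = torchvision.models.resnet50(weights="DEFAULT")
backbone.fc = torch.nn.Identity()   # expose the global-average-pooled 2048-d features
backbone.eval()

batch = torch.randn(8, 3, 224, 224)  # placeholder for resized ultrasound images
with torch.no_grad():
    feats = backbone(batch)          # (8, 2048): inputs to feature selection and fusion
```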
Breast cancer data were successfully analyzed in [87] using LeNet, a classic CNN architecture. The system demonstrated high accuracy in the early detection and diagnosis of breast cancer by extracting discriminative features and classifying malignant and benign tumors. By addressing the “dying ReLU” problem and improving the discriminative power of the extracted features, LeNet with a corrected rectified linear unit (ReLU) demonstrated enhanced performance in breast cancer data analysis tasks, making breast cancer detection and diagnosis more accurate and reliable. LeNet’s training stability and performance can be improved through batch normalization, which mitigates the effects of internal covariate shift, i.e., changes in the distribution of network activations during training. This classifier also reduces over-fitting and running time. A comparison of the designed classifier to benchmark DL models showed that it had a higher recognition rate, with 89.91% of breast images recognized accurately.

4.3. Applications of AI in Thermography Images

DL has emerged as a powerful tool in the field of medical image analysis, including breast cancer classification from thermography images. Thermography, which captures the heat patterns emitted by the body, offers a non-invasive and radiation-free alternative to traditional imaging modalities. DL algorithms, such as CNNs, have demonstrated remarkable capability in extracting intricate patterns and features from thermal images. By training the model on diverse datasets, it learns to discern subtle temperature variations associated with malignant and benign breast tissue. The utilization of DL in thermography-based breast cancer classification holds promise for improving diagnostic accuracy and early detection, potentially enhancing the effectiveness of screening programs and contributing to more personalized and timely patient care [88].
Thermal images from the Database for Mastology Research with Infrared Images (DMR-IR) were employed in [89] to explore the effectiveness of VGG16, a pre-trained CNN architecture, in combination with attention mechanisms (AMs) for diagnosing breast cancer. The investigation focused on three variants of the model, each incorporating a distinct type of AM. The methodology revealed consistency in the performance of these models across all stages of the study. Notably, the test accuracy of the VGG16 model coupled with AMs on the breast thermal dataset demonstrated promising results, reaching 99.80%, 99.49%, and 99.32%. Compared to VGG16 without AMs, the test accuracy of VGG16 with AMs exhibited notable improvement of 0.62%, underlining the potential of using attention mechanisms to enhance diagnostic performance for classifying breast cancer in thermal images.
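The study tests several AM variants; as one hedged illustration, the sketch below attaches a squeeze-and-excitation style channel-attention block to VGG16's convolutional features. This is a common attention design but not necessarily the exact mechanism used in [89], and the two-class head is an illustrative assumption.

```python
import torch
import torch.nn as nn
import torchvision

class SEAttention(nn.Module):
    """Squeeze-and-excitation style channel attention: pool, weigh, reweight."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels), nn.Sigmoid())

    def forward(self, x):
        w = self.fc(x.mean(dim=(2, 3)))      # squeeze: global average pool per channel
        return x * w[:, :, None, None]       # excite: rescale each feature channel

features = torchvision.models.vgg16(weights="DEFAULT").features  # 512-channel output
model = nn.Sequential(
    features, SEAttention(512),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(512, 2))                       # illustrative benign/malignant head

logits = model(torch.randn(2, 3, 224, 224))  # placeholder thermogram batch
```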
A two-stage model for breast cancer detection utilizing thermographic images is introduced in [90]. The first stage involves feature extraction from images using the VGG16 DL model. In the second stage, the dragonfly algorithm (DA), a metaheuristic algorithm, is used to select the optimal subset of features. To enhance the performance of the DA, a memory-based version incorporating the Grunwald–Letnikov (GL) method is proposed. The efficacy of the two-stage framework was assessed using the DMR-IR standard dataset. Impressively, the proposed model demonstrated the ability to efficiently filter out non-essential features, achieving a diagnostic accuracy of 100% on the standard dataset. Furthermore, it achieved this accuracy with 82% fewer features compared to the VGG16 model, highlighting the potential of the approach for improving both efficiency and accuracy in detecting breast cancer in thermographic images. Tello-Mijares et al. [88] focused on a segmentation method that combines the curvature function, k, and the gradient vector flow, while for classification they proposed a CNN using the segmented breast. The primary objective of the study was to compare the results of the CNN with other classification techniques. Each breast was characterized by its distinct shape, color, and texture and whether it was the left or right breast. These features were utilized to both train the models and evaluate the performance of the CNN against three other classification techniques: a tree random forest (TRF), a multilayer perceptron (MLP), and the Bayes network (BN). The findings revealed that CNN outperformed the TRF, MLP, and BN, demonstrating its superiority in breast characterization and classification based on shape, color, and texture features. Figure 3 shows the main stages of the proposed method for breast cancer segmentation from thermography images.

4.4. Benign and Malignant Classifications

Due to the high incidence and mortality of breast cancer among women globally, various measures have been developed to promote breast cancer screening for women of appropriate ages. The most significant aspect of breast cancer screening is distinguishing benign cases from malignant cases. Classification of breast lesions in ultrasound images is primarily based on the BI-RADS. In order to ensure consistency of the interpretations made by doctors with different experience levels, there have been growing efforts to develop AI systems for benign and malignant classification. Ciritsis et al. demonstrated that a deep convolutional neural network (dCNN) yielded comparable accuracy (93.1%; external: 95.3%) when classifying breast ultrasound images into BI-RADS scores of 2–3 and 4–5 compared to the classification accuracy of radiologists (91.6 ± 5.4%; external: 94.1 ± 1.2%) [91]. Meanwhile, Becker et al. reported that a DL model trained on 445 cases had comparable accuracy when analyzing 637 breast ultrasound images (84 malignant lesions and 553 benign lesions) to that of a radiologist, and better accuracy than that of a medical student (similarly trained with 445 cases) [92]. Recently, researchers have started using automatic search methods to design CNN architectures from scratch for medical imaging, including breast cancer imaging. Ahmed et al. [93] used the efficient neural architecture search (ENAS) method to generate a model, which achieved 89.3% overall accuracy, outperforming other manually designed alternatives; the ENAS-generated model also had simplified complexity and greater efficiency. To investigate the generalization of the ENAS-based model [94], they evaluated the model on external data. To address the challenge of generalization error, they investigated various techniques, such as reducing model complexity, employing data augmentation, and utilizing unbalanced training sets. The experimental findings indicate that the ENAS model trained on an unbalanced dataset with more benign images generalized well on two external datasets. Alzhoubi et al. [95] used a Bayesian optimizer as a search strategy to automatically search for a CNN architecture for breast cancer classification. The results showed that the automatically generated CNN outperformed transfer-learning CNN models on internal and external test sets. Ahmed et al. [96] proposed an automatic search environment for designing a CNN model for breast cancer classification from ultrasound images by combining the ENAS method with a Bayesian optimizer. Their method consists of two main steps: first, they used ENAS to generate optimal cells (normal and reduction); second, they used Bayesian optimization to search for the number of cells per CNN architecture and the trainable hyperparameters. The generated ENAS-B model outperformed the original ENAS and transfer-learning models in breast cancer classification. Figure 4 presents the proposed approach for automatically searching for a CNN architecture for breast cancer classification from ultrasound images.
The use of AI can significantly assist physicians in classifying and diagnosing benign and malignant cases in breast ultrasound images, particularly with regard to improving the diagnostic accuracy of inexperienced doctors. Table 2 presents key AI papers that used ultrasound images for breast cancer.

5. Applications of AI in MRI Images

Magnetic resonance imaging (MRI) stands as a pivotal imaging modality in the comprehensive arsenal deployed for diagnosing breast cancer. With its ability to provide detailed anatomical images and delineate soft tissue structures with high precision, MRI plays a crucial role in detecting and characterizing breast lesions.
Slimani [97] introduced the 3D automatic level propagation approach (3D ALPA), a technique devised to enhance the accuracy and efficiency of tumor volume reconstruction in breast MRI data. The approach comprises two steps. First, the volume to be processed is segmented slice by slice through a combination of global thresholding operations and morphological closure, ensuring precise isolation of the relevant structures within the volumetric data. Second, the segmented results from each slice are fused to reconstruct the original 3D volume housing the tumor, providing clinicians with a comprehensive representation for diagnosis and treatment planning. In a similar vein, Pandey et al. [98] developed a fully automatic and unsupervised approach by integrating the continuous max flow (CMF) method with noise reduction algorithms and morphological operations. Their innovation represents a significant stride towards automating lesion detection and segmentation in breast MRI, streamlining clinical workflows and reducing dependency on manual intervention.
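A minimal slice-wise sketch of the thresholding-plus-morphological-closure idea behind the first ALPA step, using scikit-image; the Otsu threshold, the structuring-element size, and the synthetic volume are assumptions, not parameters from [97].

```python
import numpy as np
from skimage.filters import threshold_otsu
from skimage.morphology import binary_closing, disk

def segment_slices(volume):
    """Threshold and close each slice, then restack into a 3D mask (ALPA-style sketch)."""
    masks = []
    for sl in volume:                        # volume: (n_slices, H, W) float array
        mask = sl > threshold_otsu(sl)       # global threshold (Otsu as an assumption)
        masks.append(binary_closing(mask, disk(3)))  # morphological closure per slice
    return np.stack(masks)                   # fuse slices back into a 3D volume

mask3d = segment_slices(np.random.rand(16, 128, 128))  # synthetic stand-in volume
```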
Furthermore, recent advancements in deep learning methodologies have catalyzed a paradigm shift in breast tumor targeting. The UNet architecture, a deep learning framework renowned for its efficacy in semantic segmentation tasks, has emerged as a frontrunner in automated breast lesion segmentation. Chen et al. [99] further pushed the boundaries by proposing an end-to-end network that harnesses both spatial and temporal resources, culminating in a fully automated approach to breast lesion segmentation. Their novel adaptation of the UNet architecture, coupled with the integration of ConvLSTM structures, demonstrates the potential for leveraging temporal information to enhance segmentation accuracy and robustness.
Moreover, Benjelloun et al. [100] and El Adoui et al. [101] pioneered innovative methodologies based on the UNet framework, tailoring their approaches to address the intricacies of segmentation within individual image slices. Their contributions not only underscore the versatility of deep learning frameworks but also highlight the nuanced challenges inherent in breast lesion segmentation tasks. In parallel, Lu et al. [102] and Santucci et al. [103] embraced the power of convolutional neural networks (CNNs), leveraging their inherent capability to learn complex patterns and extract meaningful features directly from input data. The use of CNNs with a softmax function at their outputs enables seamless mapping of input images to final labels, obviating the need for laborious feature extraction steps. Additionally, their use of proprietary databases underscores the critical role of data accessibility and quality in training robust deep learning models for clinical applications.

6. Discussion

Despite tremendous advancements in the medical field over the past decade with the introduction of AI techniques, the integration and large-scale application of these techniques are still in the initial stages. CAD systems have several limitations regarding breast cancer screening, such as the scarcity of large-scale public datasets, reliance on RoI annotation, high image-quality requirements, regional differences, and over-fitting and binary classification issues. Furthermore, most current AI techniques cannot handle multiple tasks concurrently, which poses a challenge for the development of DL models for breast imaging. These issues are driving the development of the breast imaging diagnostic discipline and reflect the broad prospects of intelligent medical imaging.
Apart from using AI techniques in conventional imaging methods, DL-based CAD systems have been under rapid development for digital breast tomosynthesis [104,105,106,107,108,109,110,111,112,113,114,115,116], ultrasound [107,108], and contrast-enhanced mammography. AI in breast imaging can be used to detect, classify, and predict breast diseases; classify specific breast diseases (e.g., fibroplasia); and even predict lymph node metastasis [109] and disease recurrence [110]. With technological advancements in the field of AI, the classification and diagnosis of breast diseases and the establishment of adjuvant treatment are expected to become more efficient and accurate, allowing for more effective early detection, diagnosis, and treatment for patients.
AI techniques, especially DL, are increasingly used in medical imaging because of their promising potential and outstanding performance in analyzing medical images. They offer fast computation, good repeatability, and freedom from fatigue, providing doctors with accurate and objective information and thereby reducing workload as well as misdiagnoses and missed diagnoses [111]. Numerous studies have explored CAD systems for breast cancer screening. Regardless of the imaging modality (e.g., mammography, ultrasound, or MRI), these systems can reliably identify and segment breast lesions, extract and classify features, estimate breast disease and breast cancer risk, and evaluate treatment response and prognosis [112,113,114,115,116]. CAD systems are therefore highly promising tools for assisting doctors, optimizing resource allocation, and improving accuracy.

7. Conclusions

This paper has presented a comprehensive review of recent advances in machine learning and deep learning techniques for the detection and classification of breast cancer. By surveying the methodologies employed across diverse medical image types, it provides a thorough overview of current approaches to breast cancer identification, with particular emphasis on traditional machine learning and deep learning methods and their significance in this domain.
Nevertheless, developing AI for breast cancer recognition faces numerous challenges: limited and imbalanced datasets, the need for interpretable AI in medical decision-making, privacy concerns surrounding sensitive medical data, ethical considerations related to biases in data, the resource-intensive manual annotation of datasets, the constant evolution of technology requiring ongoing updates, and the necessity for regulatory compliance before deployment in clinical settings. Overcoming these challenges requires collaboration among researchers, healthcare professionals, regulators, and technology developers to create ethically sound and effective AI solutions for breast cancer detection and diagnosis.

Author Contributions

D.A.-K., S.A.-Z. and M.H.A. reviewed the AI literature and wrote the initial draft of this manuscript. K.A.H., N.O., A.M.M., T.A., B.A.A. and M.S. reviewed the medical literature and prepared the manuscript. All authors contributed to the interpretation and revision of this article and jointly completed the final draft. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data sharing not applicable.

Conflicts of Interest

Authors Dhurgham Al-Karawi and Shakir Al-Zaidi were employed by the company Medical Analytica Ltd. The remaining authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

References

  1. Yassin, N.I.; Omran, S.; El Houby, E.M.; Allam, H. Machine learning techniques for breast cancer computer aided diagnosis using different image modalities: A systematic review. Comput. Methods Programs Biomed. 2018, 156, 25–45. [Google Scholar] [CrossRef]
  2. El-Nabawy, A.; El-Bendary, N.; Belal, N.A. A feature-fusion framework of clinical, genomics, and histopathological data for METABRIC breast cancer subtype classification. Appl. Soft Comput. 2020, 91, 106238. [Google Scholar] [CrossRef]
  3. Aggarwal, R.; Sounderajah, V.; Martin, G.; Ting, D.S.W.; Karthikesalingam, A.; King, D.; Ashrafian, H.; Darzi, A. Diagnostic accuracy of deep learning in medical imaging: A systematic review and meta-analysis. npj Digit. Med. 2021, 4, 65. [Google Scholar] [CrossRef]
  4. Bray, F.; Ferlay, J.; Soerjomataram, I.; Siegel, R.L.; Torre, L.A.; Jemal, A. Global cancer statistics 2018: GLOBOCAN estimates of incidence and mortality worldwide for 36 cancers in 185 countries. CA Cancer J. Clin. 2018, 68, 394–424. [Google Scholar] [CrossRef]
  5. Iranmakani, S.; Mortezazadeh, T.; Sajadian, F.; Ghaziani, M.F.; Ghafari, A.; Khezerloo, D.; Musa, A.E. A review of various modalities in breast imaging: Technical aspects and clinical outcomes. Egypt. J. Radiol. Nucl. Med. 2020, 51, 57. [Google Scholar] [CrossRef]
  6. Nassif, A.B.; Abu Talib, M.; Nasir, Q.; Afadar, Y.; Elgendy, O. Breast cancer detection using artificial intelligence techniques: A systematic literature review. Artif. Intell. Med. 2022, 127, 102276. [Google Scholar] [CrossRef]
  7. Yao, H.; Zhang, X.; Zhou, X.; Liu, S. Parallel Structure Deep Neural Network Using CNN and RNN with an Attention Mechanism for Breast Cancer Histology Image Classification. Cancers 2019, 11, 1901. [Google Scholar] [CrossRef]
  8. Ha, R.; Chang, P.; Mutasa, S.; Karcich, J.; Goodman, S.; Blum, E.; Kalinsky, K.; Liu, M.Z.; Jambawalikar, S. Convolutional Neural Network Using a Breast MRI Tumor Dataset Can Predict Oncotype Dx Recurrence Score. J. Magn. Reson. Imaging 2019, 49, 518–524. [Google Scholar] [CrossRef]
  9. Mohaideen, K.; Negi, A.; Verma, D.K.; Kumar, N.; Sennimalai, K.; Negi, A. Applications of artificial intelligence and machine learning in orthognathic surgery: A scoping review. J. Stomatol. Oral Maxillofac. Surg. 2022, 123, e962–e972. [Google Scholar] [CrossRef]
  10. Derevianko, A.; Pizzoli, S.F.M.; Pesapane, F.; Rotili, A.; Monzani, D.; Grasso, R.; Cassano, E.; Pravettoni, G. The Use of Artificial Intelligence (AI) in the Radiology Field: What Is the State of Doctor–Patient Communication in Cancer Diagnosis? Cancers 2023, 15, 470. [Google Scholar] [CrossRef]
  11. Al-Karawi, D.; Al-Assam, H.; Du, H.; Sayasneh, A.; Landolfo, C.; Timmerman, D.; Bourne, T.; Jassim, S. An Evaluation of the Effectiveness of Image-based Texture Features Extracted from Static B-mode Ultrasound Images in Distinguishing between Benign and Malignant Ovarian Masses. Ultrason. Imaging 2021, 43, 124–138. [Google Scholar] [CrossRef]
  12. Al-Karawi, D.; Sayasneh, A.; Al-Assam, H.; Jassim, S.; Page, N.; Timmerman, D.; Bourne, T.; Du, H. An automated technique for potential differentiation of ovarian mature teratomas from other benign tumours using neural networks classification of 2D ultrasound static images: A pilot study. In Proceedings of the Mobile Multimedia/Image Processing, Security, and Applications, Anaheim, CA, USA, 10–11 April 2017; Volume 10221, pp. 77–84. [Google Scholar]
  13. Al-Karawi, D.; Landolfo, C.; Du, H.; Al-Assam, H.; Sayasneh, A.; Timmerman, D.; Bourne, T.; Jassim, S. Prospective clinical evaluation of texture-based features analysis of ultrasound ovarian scans for distinguishing benign and malignant adnexal tumors. Australas. J. Ultrasound Med. 2019, 22, 144. [Google Scholar] [CrossRef]
  14. Wu, M.; Yan, C.; Liu, H.; Liu, Q. Automatic classification of ovarian cancer types from cytological images using deep convolutional neural networks. Biosci. Rep. 2018, 38, BSR20180289. [Google Scholar] [CrossRef]
  15. Shenbagavalli, P.; Thangarajan, R. Aiding the Digital Mammogram for Detecting the Breast Cancer Using Shearlet Transform and Neural Network. Asian Pac. J. Cancer Prev. APJCP 2018, 19, 2665–2671. [Google Scholar]
  16. Karacan, K.; Uyar, T.; Tunga, B.; Tunga, M.A. A novel multistage CAD system for breast cancer diagnosis. Signal Image Video Process. 2023, 17, 2359–2368. [Google Scholar] [CrossRef]
  17. Rahman, H.; Bukht, T.F.N.; Ahmad, R.; Almadhor, A.; Javed, A.R. Efficient Breast Cancer Diagnosis from Complex Mammographic Images Using Deep Convolutional Neural Network. Comput. Intell. Neurosci. 2023, 2023, 7717712. [Google Scholar] [CrossRef]
  18. Abdelhafiz, D.; Yang, C.; Ammar, R.; Nabavi, S. Deep convolutional neural networks for mammography: Advances, challenges and applications. BMC Bioinform. 2019, 20, 281. [Google Scholar] [CrossRef]
  19. Hamet, P.; Tremblay, J. Artificial intelligence in medicine. Metabolism 2017, 69, S36–S40. [Google Scholar] [CrossRef]
  20. Dhanalakshmi, R.; Anand, J. Big data for personalized healthcare. In Handbook of Intelligent Healthcare Analytics: Knowledge Engineering with Big Data Analytics; Scrivener Publishing LLC.: Beverly, MA, USA, 2022; pp. 67–92. [Google Scholar]
  21. Al-Antari, M.A.; Al-Masni, M.A.; Choi, M.T.; Han, S.M.; Kim, T.S. A fully integrated computer-aided diagnosis system for digital X-ray mammograms via deep learning detection, segmentation, and classification. Int. J. Med. Inform. 2018, 117, 44–54. [Google Scholar] [CrossRef]
  22. Rodriguez-Ruiz, A.; Lång, K.; Gubern-Merida, A.; Teuwen, J.; Broeders, M.; Gennaro, G.; Clauser, P.; Helbich, T.H.; Chevalier, M.; Mertelmeier, T.; et al. Can we reduce the workload of mammographic screening by automatic identification of normal exams with artificial intelligence? A feasibility study. Eur. Radiol. 2019, 29, 4825–4832. [Google Scholar] [CrossRef]
  23. Yanagawa, M.; Niioka, H.; Hata, A.; Kikuchi, N.; Honda, O.; Kurakami, H.; Morii, E.; Noguchi, M.; Watanabe, Y.; Miyake, J.; et al. Application of deep learning (3-dimensional convolutional neural network) for the prediction of pathological invasiveness in lung adenocarcinoma: A preliminary study. Medicine 2019, 98, e16119. [Google Scholar] [CrossRef]
  24. Le, E.; Wang, Y.; Huang, Y.; Hickman, S.; Gilbert, F. Artificial intelligence in breast imaging. Clin. Radiol. 2019, 74, 357–366. [Google Scholar] [CrossRef]
  25. Suckling, J.; Parker, J.; Dance, D.; Astley, S.; Hutt, I.; Boggis, C.; Ricketts, I.; Stamatakis, E.; Cerneaz, N.; Kok, S.; et al. Mammographic Image Analysis Society (MIAS) Database v1.21, Apollo—University of Cambridge Repository: Cambridge, UK, 2015. [CrossRef]
  26. Heath, M.; Bowyer, K.; Kopans, D.; Kegelmeyer, P., Jr.; Moore, R.; Chang, K.; Munishkumaran, S. Current status of the digital database for screening mammography. In Digital Mammography: Nijmegen; Springer: Dordrecht, The Netherlands, 1998; Volume 13, pp. 457–460. [Google Scholar] [CrossRef]
  27. Moreira, I.; Amaral, I.; Domingues, I.; Cardoso, A.; Cardoso, M.; Cardoso, J. INbreast: Toward a full-field digital mammographic database. Acad. Radiol. 2011, 19, 236–248. [Google Scholar] [CrossRef]
  28. Lopez, M.G.; Posada, N.; Moura, D.C.; Pollán, R.R.; Valiente, J.M.; Ortega, C.S.; Solar, M.; Diaz-Herrero, G.; Ramos, I.M.; Loureiro, J.; et al. BCDR: A breast cancer digital repository. In Proceedings of the 15th International Conference on Experimental Mechanics, Porto, Portugal, 22–27 July 2012; Volume 1215, pp. 113–120. [Google Scholar]
  29. Arevalo, J.; González, F.A.; Ramos-Pollán, R.; Oliveira, J.L.; Lopez, M.A.G. Representation learning for mammography mass lesion classification with convolutional neural networks. Comput. Methods Programs Biomed. 2016, 127, 248–257. [Google Scholar] [CrossRef]
  30. Matheus, B.R.N.; Schiabel, H. Online mammographic images database for development and comparison of CAD schemes. J. Digit. Imaging 2011, 24, 500–506. [Google Scholar] [CrossRef]
  31. Badano, A.; Graff, C.G.; Badal, A.; Sharma, D.; Zeng, R.; Samuelson, F.W.; Glick, S.J.; Myers, K.J. Evaluation of digital breast tomosynthesis as replacement of full-field digital mammography using an in silico imaging trial. JAMA Netw. Open 2018, 1, e185474. [Google Scholar] [CrossRef]
  32. Halling-Brown, M.D.; Warren, L.M.; Ward, D.; Lewis, E.; Mackenzie, A.; Wallis, M.G.; Wilkinson, L.S.; Given-Wilson, R.M.; McAvinchey, R.; Young, K.C. Optimam mammography image database: A large-scale resource of mammography images and clinical data. Radiology. Artif. Intell. 2020, 3, e200103. [Google Scholar] [CrossRef]
  33. Yap, M.H.; Pons, G.; Marti, J.; Ganau, S.; Sentis, M.; Zwiggelaar, R.; Davison, A.K.; Marti, R. Automated breast ultrasound lesions detection using convolutional neural networks. IEEE J. Biomed. Health Inform. 2017, 22, 1218–1226. [Google Scholar] [CrossRef]
  34. Al-Dhabyani, W.; Gomaa, M.; Khaled, H.; Fahmy, A. Dataset of breast ultrasound images. Data Brief 2020, 28, 104863. [Google Scholar] [CrossRef]
  35. Zhang, Y.; Xian, M.; Cheng, H.D.; Shareef, B.; Ding, J.; Xu, F.; Huang, K.; Zhang, B.; Ning, C.; Wang, Y. BUSIS: A Benchmark for Breast Ultrasound Image Segmentation. Healthcare 2022, 10, 729. [Google Scholar] [CrossRef]
  36. Welch, H.G.; Prorok, P.C.; O’Malley, A.J.; Kramer, B.S. Breast-Cancer Tumor Size, Overdiagnosis, and Mammography Screening Effectiveness. N. Engl. J. Med. 2016, 375, 1438–1447. [Google Scholar] [CrossRef]
  37. Ali, M.A.; Eriksson, M.; Czene, K.; Hall, P.; Humphreys, K. Detection of potential microcalcification clusters using multivendor for-presentation digital mammograms for short-term breast cancer risk estimation. Med. Phys. 2019, 46, 1938–1946. [Google Scholar]
  38. Parvathavarthini, S.; Shanthi, S. Breast Cancer Detection using Crow Search Optimization based Intuitionistic Fuzzy Clustering with Neighborhood Attraction. Asian Pac. J. Cancer Prev. 2019, 20, 157–165. [Google Scholar]
  39. Jiang, Y.; Inciardi, M.F.; Edwards, A.V.; Papaioannou, J. Interpretation Time Using a Concurrent-Read Computer-Aided Detection System for Automated Breast Ultrasound in Breast Cancer Screening of Women with Dense Breast Tissue. Am. J. Roentgenol. 2018, 211, 452–461. [Google Scholar] [CrossRef]
  40. Fan, M.; Li, Y.; Zheng, S.; Peng, W.; Tang, W.; Li, L. Computer-aided detection of mass in digital breast tomosynthesis using a faster region-based convolutional neural network. Methods 2019, 166, 103–111. [Google Scholar] [CrossRef]
  41. Ertosun, M.G.; Rubin, D.L. Probabilistic visual search for masses within mammography images using deep learning. In Proceedings of the IEEE International Conference on Bioinformatics and Biomedicine (BIBM), Washington, DC, USA, 9–12 November 2015; Volume 2015, pp. 1310–1315. [Google Scholar] [CrossRef]
  42. Al-Masni, M.A.; Al-Antari, M.A.; Park, J.-M.; Gi, G.; Rivera, P.; Valarezo, E.; Choi, M.-T.; Han, S.-M.; Kim, T.-S. Simultaneous detection and classification of breast masses in digital mammograms via a deep learning yolo-based cad system. Comput. Methods Progr. Biomed. 2018, 157, 85–94. [Google Scholar] [CrossRef]
  43. Redmon, J.; Divvala, S.; Girshick, R.; Farhadi, A. You Only Look Once: Unified, Real-Time Object Detection. 2015. Available online: http://arxiv.org/abs/1506.02640 (accessed on 4 March 2024).
  44. Ribli, D.; Horváth, A.; Unger, Z.; Pollner, P.; Csabai, I. Detecting and classifying lesions in mammograms with deep learning. Sci. Rep. 2018, 8, 4165. [Google Scholar] [CrossRef]
  45. Zhu, W.; Lou, Q.; Vang, Y.S.; Xie, X. Deep multi-instance networks with sparse label assignment for whole mammogram classification. In Proceedings of the Medical Image Computing and Computer Assisted Intervention—MICCAI 2017: 20th International Conference, Quebec City, QC, Canada, 11–13 September 2017; Proceedings, Part III 20. Springer International Publishing: Cham, Switzerland, 2017; pp. 603–611. [Google Scholar] [CrossRef]
  46. Bouzar-Benlabiod, L.; Harrar, K.; Yamoun, L.; Khodja, M.Y.; Akhloufi, M.A. A novel breast cancer detection architecture based on a CNN-CBR system for mammogram classification. Comput. Biol. Med. 2023, 163, 107133. [Google Scholar] [CrossRef]
  47. Gao, F.; Wu, T.; Li, J.; Zheng, B.; Ruan, L.; Shang, D.; Patel, B. SD-CNN: A shallow-deep CNN for improved breast cancer diagnosis. Comput. Med. Imaging Graph. 2018, 70, 53–62. [Google Scholar] [CrossRef]
  48. Elkorany, A.S.; Elsharkawy, Z.F. Efficient breast cancer mammograms diagnosis using three deep neural networks and term variance. Sci. Rep. 2023, 13, 2663. [Google Scholar] [CrossRef]
  49. Yu, X.; Pang, W.; Xu, Q.; Liang, M. Mammographic image classification with deep fusion learning. Sci. Rep. 2020, 10, 14361. [Google Scholar] [CrossRef]
  50. Hmida, M.; Hamrouni, K.; Solaiman, B.; Boussetta, S. Mammographic mass segmentation using fuzzy contours. Comput. Methods Programs Biomed. 2018, 164, 131–142. [Google Scholar] [CrossRef]
  51. Kashyap, K.L.; Bajpai, M.K.; Khanna, P. Globally supported radial basis function based collocation method for evolution of level set in mass segmentation using mammograms. Comput. Biol. Med. 2017, 87, 22–37. [Google Scholar] [CrossRef]
  52. Hussain, S.; Xi, X.; Ullah, I.; Inam, S.A.; Naz, F.; Shaheed, K.; Ali, S.A.; Tian, C. A Discriminative Level Set Method with Deep Supervision for Breast Tumor Segmentation. Comput. Biol. Med. 2022, 149, 105995. [Google Scholar] [CrossRef]
  53. Pezeshki, H. Breast tumor segmentation in digital mammograms using spiculated regions. Biomed. Signal Process. Control. 2022, 76, 103652. [Google Scholar] [CrossRef]
  54. Salama, W.M.; Aly, M.H. Deep learning in mammography images segmentation and classification: Automated CNN approach. Alex. Eng. J. 2021, 60, 4701–4709. [Google Scholar] [CrossRef]
  55. Li, S.; Dong, M.; Du, G.; Mu, X. Attention Dense-U-Net for Automatic Breast Mass Segmentation in Digital Mammogram. IEEE Access 2019, 7, 59037–59047. [Google Scholar] [CrossRef]
  56. Sani, Z.; Prasad, R.; Hashim, E.K.M. Grouped mask region convolution neural networks for improved breast cancer segmentation in mammography images. Evol. Syst. 2024, 15, 25–40. [Google Scholar] [CrossRef]
  57. Teare, P.; Fishman, M.; Benzaquen, O.; Toledano, E.; Elnekave, E. Malignancy Detection on Mammography Using Dual Deep Convolutional Neural Networks and Genetically Discovered False Color Input Enhancement. J. Digit. Imaging 2017, 30, 499–505. [Google Scholar] [CrossRef]
  58. Suradi, S.H.; Abdullah, K.A.; Isa, N.A.M. Improvement of image enhancement for mammogram images using Fuzzy Anisotropic Diffusion Histogram Equalisation Contrast Adaptive Limited (FADHECAL). Comput. Methods Biomech. Biomed. Eng. Imaging Vis. 2021, 10, 67–75. [Google Scholar] [CrossRef]
  59. Dounis, A.; Avramopoulos, A.-N.; Kallergi, M. Advanced Fuzzy Sets and Genetic Algorithm Optimizer for Mammographic Image Enhancement. Electronics 2023, 12, 3269. [Google Scholar] [CrossRef]
  60. Sun, Y.-S.; Zhao, Z.; Yang, Z.-N.; Xu, F.; Lu, H.-J.; Zhu, Z.-Y.; Shi, W.; Jiang, J.; Yao, P.-P.; Zhu, H.-P. Risk Factors and Preventions of Breast Cancer. Int. J. Biol. Sci. 2017, 13, 1387–1397. [Google Scholar] [CrossRef] [PubMed]
  61. Nindrea, R.D.; Aryandono, T.; Lazuardi, L.; Dwiprahasto, I. Diagnostic Accuracy of Different Machine Learning Algorithms for Breast Cancer Risk Calculation: A Meta-Analysis. Asian Pac. J. Cancer Prev. 2018, 19, 1747–1752. [Google Scholar] [PubMed]
  62. Sepandi, M.; Taghdir, M.; Rezaianzadeh, A.; Rahimikazerooni, S. Assessing Breast Cancer Risk with an Artificial Neural Network. Asian Pac. J. Cancer Prev. APJCP 2018, 19, 1017–1019. [Google Scholar] [PubMed]
  63. Dembrower, K.; Liu, Y.; Azizpour, H.; Eklund, M.; Smith, K.; Lindholm, P.; Strand, F. Comparison of a Deep Learning Risk Score and Standard Mammographic Density Score for Breast Cancer Risk Prediction. Radiology 2020, 294, 265–272. [Google Scholar] [CrossRef] [PubMed]
  64. Arefan, D.; Mohamed, A.A.; Berg, W.A.; Zuley, M.L.; Sumkin, J.H.; Wu, S. Deep learning modeling using normal mammograms for predicting breast cancer risk. Med. Phys. 2019, 47, 110–118. [Google Scholar] [CrossRef] [PubMed]
  65. Yala, A.; Lehman, C.; Schuster, T.; Portnoi, T.; Barzilay, R. A Deep Learning Mammography-based Model for Improved Breast Cancer Risk Prediction. Radiology 2019, 292, 60–66. [Google Scholar] [CrossRef]
  66. Yala, A.; Mikhael, P.G.; Strand, F.; Lin, G.; Smith, K.; Wan, Y.-L.; Lamb, L.; Hughes, K.; Lehman, C.; Barzilay, R. Toward robust mammography-based models for breast cancer risk. Sci. Transl. Med. 2021, 13, eaba4373. [Google Scholar] [CrossRef]
  67. Ha, R.; Chang, P.; Karcich, J.; Mutasa, S.; Van Sant, E.P.; Liu, M.Z.; Jambawalikar, S. Convolutional Neural Network Based Breast Cancer Risk Stratification Using a Mammographic Dataset. Acad. Radiol. 2019, 26, 544–549. [Google Scholar] [CrossRef]
  68. Akkus, Z.; Cai, J.; Boonrod, A.; Zeinoddini, A.; Weston, A.D.; Philbrick, K.A.; Erickson, B.J. A Survey of Deep-Learning Applications in Ultrasound: Artificial Intelligence–Powered Ultrasound for Improving Clinical Workflow. J. Am. Coll. Radiol. 2019, 16, 1318–1328. [Google Scholar] [CrossRef]
  69. Park, H.J.; Kim, S.M.; La Yun, B.; Jang, M.; Kim, B.; Jang, J.Y.; Lee, J.Y.; Lee, S.H. A computer-aided diagnosis system using artificial intelligence for the diagnosis and characterization of breast masses on ultrasound: Added value for the inexperienced breast radiologist. Medicine 2019, 98, e14146. [Google Scholar] [CrossRef]
  70. Wu, G.-G.; Zhou, L.-Q.; Xu, J.-W.; Wang, J.-Y.; Wei, Q.; Deng, Y.-B.; Cui, X.-W.; Dietrich, C.F. Artificial intelligence in breast ultrasound. World J. Radiol. 2019, 11, 19–26. [Google Scholar] [CrossRef] [PubMed]
  71. Hu, Y.; Guo, Y.; Wang, Y.; Yu, J.; Li, J.; Zhou, S.; Chang, C. Automatic tumor segmentation in breast ultrasound images using a dilated fully convolutional network combined with an active contour model. Med. Phys. 2019, 46, 215–228. [Google Scholar] [CrossRef]
  72. Kumar, V.; Webb, J.M.; Gregory, A.; Denis, M.; Meixner, D.D.; Bayat, M.; Whaley, D.H.; Fatemi, M.; Alizad, A. Automated and real-time segmentation of suspicious breast masses using convolutional neural network. PLoS ONE 2018, 13, e0195816. [Google Scholar] [CrossRef] [PubMed]
  73. Feng, Y.; Guo, H.; Zhang, H.; Li, C.; Sun, L.; Mutic, S.; Ji, S.; Hu, Y. A modified fuzzy C-means method for segmenting MR images using non-local information. Technol. Health Care 2016, 24, S785–S793. [Google Scholar] [CrossRef] [PubMed]
  74. Drukker, K.; Giger, M.L.; Horsch, K.; Kupinski, M.A.; Vyborny, C.J.; Mendelson, E.B. Computerized lesion detection on breast ultrasound. Med. Phys. 2002, 29, 1438–1446. [Google Scholar] [CrossRef] [PubMed]
  75. Li, Y.; Wu, W.; Chen, H.; Cheng, L.; Wang, S. 3D tumor detection in automated breast ultrasound using deep convolutional neural network. Med. Phys. 2020, 47, 5669–5680. [Google Scholar] [CrossRef] [PubMed]
  76. Cao, Z.; Duan, L.; Yang, G.; Yue, T.; Chen, Q. An experimental study on breast lesion detection and classification from ultrasound images using deep learning architectures. BMC Med. Imaging 2019, 19, 51. [Google Scholar] [CrossRef] [PubMed]
  77. Yap, M.H.; Goyal, M.; Osman, F.; Martí, R.; Denton, E.; Juette, A.; Zwiggelaar, R. Breast ultrasound region of interest detection and lesion localisation. Artif. Intell. Med. 2020, 107, 101880. [Google Scholar] [CrossRef]
  78. Li, Y.; Gu, H.; Wang, H.; Qin, P.; Wang, J. BUSnet: A Deep Learning Model of Breast Tumor Lesion Detection for Ultrasound Images. Front. Oncol. 2022, 12, 848271. [Google Scholar] [CrossRef]
  79. Labcharoenwongs, P.; Vonganansup, S.; Chunhapran, O.; Noolek, D.; Yampaka, T. An Automatic Breast Tumor Detection and Classification including Automatic Tumor Volume Estimation Using Deep Learning Technique. Asian Pac. J. Cancer Prev. 2023, 24, 1081–1088. [Google Scholar] [CrossRef]
  80. Chorianopoulos, A.M.; Daramouskas, I.; Perikos, I.; Grivokostopoulou, F.; Hatzilygeroudis, I. Deep learning methods in medical imaging for the recognition of breast cancer. In Proceedings of the 2020 11th International Conference on Information, Intelligence, Systems and Applications (IISA), Piraeus, Greece, 15–17 July 2020. [Google Scholar]
  81. Byra, M.; Jarosik, P.; Szubert, A.; Galperin, M.; Ojeda-Fournier, H.; Olson, L.; O’boyle, M.; Comstock, C.; Andre, M. Breast mass segmentation in ultrasound with selective kernel U-Net convolutional neural network. Biomed. Signal Process. Control. 2020, 61, 102027. [Google Scholar] [CrossRef]
  82. Zhu, Y.; Li, C.; Hu, K.; Luo, H.; Zhou, M.; Li, X.; Gao, X. A new two-stream network based on feature separation and complementation for ultrasound image segmentation. Biomed. Signal Process. Control. 2023, 82, 104567. [Google Scholar] [CrossRef]
  83. Hsu, S.-M.; Kuo, W.-H.; Kuo, F.-C.; Liao, Y.-Y. Breast tumor classification using different features of quantitative ultrasound parametric images. Int. J. Comput. Assist. Radiol. Surg. 2019, 14, 623–633. [Google Scholar] [CrossRef]
  84. Zhang, Q.; Xiao, Y.; Dai, W.; Suo, J.; Wang, C.; Shi, J.; Zheng, H. Deep learning based classification of breast tumors with shear-wave elastography. Ultrasonics 2016, 72, 150–157. [Google Scholar] [CrossRef]
  85. Choi, J.-H.; Kang, B.J.; Baek, J.E.; Lee, H.S.; Kim, S.H. Application of computer-aided diagnosis in breast ultrasound interpretation: Improvements in diagnostic performance according to reader experience. Ultrasonography 2018, 37, 217–225. [Google Scholar] [CrossRef]
  86. Jabeen, K.; Khan, M.A.; Alhaisoni, M.; Tariq, U.; Zhang, Y.-D.; Hamza, A.; Mickus, A.; Damaševičius, R. Breast Cancer Classification from Ultrasound Images Using Probability-Based Optimal Deep Learning Feature Fusion. Sensors 2022, 22, 807. [Google Scholar] [CrossRef]
  87. Balasubramaniam, S.; Velmurugan, Y.; Jaganathan, D.; Dhanasekaran, S. A Modified LeNet CNN for Breast Cancer Diagnosis in Ultrasound Images. Diagnostics 2023, 13, 2746. [Google Scholar] [CrossRef]
  88. Tello-Mijares, S.; Woo, F.; Flores, F. Breast cancer identification via thermography image segmentation with a gradient vector flow and a convolutional neural network. J. Healthc. Eng. 2019, 2019, 9807619. [Google Scholar] [CrossRef]
  89. Alshehri, A.; AlSaeed, D. Breast Cancer Diagnosis in Thermography Using Pre-Trained VGG16 with Deep Attention Mechanisms. Symmetry 2023, 15, 582. [Google Scholar] [CrossRef]
  90. Chatterjee, S.; Biswas, S.; Majee, A.; Sen, S.; Oliva, D.; Sarkar, R. Breast cancer detection from thermal images using a Grunwald-Letnikov-aided Dragonfly algorithm-based deep feature selection method. Comput. Biol. Med. 2022, 141, 105027. [Google Scholar] [CrossRef]
  91. Ciritsis, A.; Rossi, C.; Eberhard, M.; Marcon, M.; Becker, A.S.; Boss, A. Automatic classification of ultrasound breast lesions using a deep convolutional neural network mimicking human decision-making. Eur. Radiol. 2019, 29, 5458–5468. [Google Scholar] [CrossRef]
  92. Becker, A.S.; Mueller, M.; Stoffel, E.; Marcon, M.; Ghafoor, S.; Boss, A. Classification of breast cancer from ultrasound imaging using a generic deep learning analysis software: A pilot study. Br. J. Radiol. 2018, 91, 20170576. [Google Scholar] [CrossRef]
  93. Ahmed, M.; Du, H.; AlZoubi, A. An ENAS based approach for constructing deep learning models for breast cancer recognition from ultrasound images. arXiv 2020, arXiv:2005.13695. [Google Scholar]
  94. Ahmed, M.; AlZoubi, A.; Du, H. Improving generalization of ENAS-based CNN models for breast lesion classification from ultrasound images. In Proceedings of the Medical Image Understanding and Analysis: 25th Annual Conference, MIUA 2021, Oxford, UK, 12–14 July 2021; Springer International Publishing: Berlin/Heidelberg, Germany, 2021; pp. 438–453. [Google Scholar]
  95. AlZoubi, A.; Lu, F.; Zhu, Y.; Ying, T.; Ahmed, M.; Du, H. Classification of breast lesions in ultrasound images using deep convolutional neural networks: Transfer learning versus automatic architecture design. Med. Biol. Eng. Comput. 2023, 62, 135–149. [Google Scholar] [CrossRef]
  96. Ahmed, M.; Du, H.; AlZoubi, A. ENAS-B: Combining ENAS with Bayesian Optimization for Automatic Design of Optimal CNN Architectures for Breast Lesion Classification from Ultrasound Images. Ultrason. Imaging 2023, 46, 17–28. [Google Scholar] [CrossRef]
  97. Bouchebbah, F.; Slimani, H. 3D automatic levels propagation approach to breast MRI tumor segmentation. Expert Syst. Appl. 2021, 165, 113965. [Google Scholar] [CrossRef]
  98. Pandey, D.; Wang, H.; Yin, X.; Wang, K.; Zhang, Y.; Shen, J. Automatic breast lesion segmentation in phase preserved DCE-MRIs. Health Inf. Sci. Syst. 2022, 10, 9. [Google Scholar] [CrossRef]
  99. Chen, M.; Zheng, H.; Lu, C.; Tu, E.; Yang, J.; Kasabov, N. A spatio-temporal fully convolutional network for breast lesion segmentation in DCE-MRI. In Neural Information Processing; Cheng, L., Leung, A.C.S., Ozawa, S., Eds.; Springer International Publishing: Cham, Switzerland, 2018; pp. 358–368. [Google Scholar] [CrossRef]
  100. Benjelloun, M.; Adoui, M.E.; Larhmam, M.A.; Mahmoudi, S.A. Automated breast tumor segmentation in DCE-MRI using deep learning. In Proceedings of the 2018 4th International Conference on Cloud Computing Technologies and Applications, Cloudtech, Brussels, Belgium, 26–28 November 2018; pp. 1–6. [Google Scholar] [CrossRef]
  101. El Adoui, M.; Mahmoudi, S.A.; Larhmam, M.A.; Benjelloun, M. MRI breast tumor segmentation using different encoder and decoder CNN architectures. Computer 2019, 8, 52. [Google Scholar] [CrossRef]
  102. Lu, W.; Wang, Z.; He, Y.; Yu, H.; Xiong, N.; Wei, J. Breast cancer detection based on merging four modes MRI using convolutional neural networks. In Proceedings of the ICASSP 2019—2019 IEEE International Conference on Acoustics, Speech and Signal Processing, ICASSP, Brighton, UK, 12–17 May 2019; pp. 1035–1039. [Google Scholar] [CrossRef]
  103. Santucci, D.; Faiella, E.; Gravina, M.; Cordelli, E.; de Felice, C.; Zobel, B.B.; Iannello, G.; Sansone, C.; Soda, P. CNN-based approaches with different tumor bounding options for lymph node status prediction in breast DCE-MRI. Cancers 2022, 14, 4574. [Google Scholar] [CrossRef]
  104. Sechopoulos, I.; Teuwen, J.; Mann, R. Artificial intelligence for breast cancer detection in mammography and digital breast tomosynthesis: State of the art. Semin. Cancer Biol. 2021, 72, 214–225. [Google Scholar] [CrossRef] [PubMed]
  105. Skaane, P.; Bandos, A.I.; Niklason, L.T.; Sebuødegård, S.; Østerås, B.H.; Gullien, R.; Gur, D.; Hofvind, S. Digital Mammography versus Digital Mammography Plus Tomosynthesis in Breast Cancer Screening: The Oslo Tomosynthesis Screening Trial. Radiology 2019, 291, 23–30. [Google Scholar] [CrossRef] [PubMed]
  106. Lotter, W.; Diab, A.R.; Haslam, B.; Kim, J.G.; Grisot, G.; Wu, E.; Wu, K.; Onieva, J.O.; Boyer, Y.; Boxerman, J.L.; et al. Robust breast cancer detection in mammography and digital breast tomosynthesis using an annotation-efficient deep learning approach. Nat. Med. 2021, 27, 244–249. [Google Scholar] [CrossRef]
  107. Zhang, Q.; Song, S.; Xiao, Y.; Chen, S.; Shi, J.; Zheng, H. Dual-mode artificially-intelligent diagnosis of breast tumours in shear-wave elastography and B-mode ultrasound using deep polynomial networks. Med. Eng. Phys. 2019, 64, 1–6. [Google Scholar] [CrossRef]
  108. Adachi, M.; Fujioka, T.; Mori, M.; Kubota, K.; Kikuchi, Y.; Xiaotong, W.; Oyama, J.; Kimura, K.; Oda, G.; Nakagawa, T.; et al. Detection and Diagnosis of Breast Cancer Using Artificial Intelligence Based Assessment of Maximum Intensity Projection Dynamic Contrast-Enhanced Magnetic Resonance Images. Diagnostics 2020, 10, 330. [Google Scholar] [CrossRef]
  109. Zhou, L.-Q.; Wu, X.-L.; Huang, S.-Y.; Wu, G.-G.; Ye, H.-R.; Wei, Q.; Bao, L.-Y.; Deng, Y.-B.; Li, X.-R.; Cui, X.-W.; et al. Lymph Node Metastasis Prediction from Primary Breast Cancer US Images Using Deep Learning. Radiology 2020, 294, 19–28. [Google Scholar] [CrossRef]
  110. Chan, H.-P.; Samala, R.K.; Hadjiiski, L.M. CAD and AI for breast cancer—Recent development and challenges. Br. J. Radiol. 2019, 93, 20190580. [Google Scholar] [CrossRef] [PubMed]
  111. Morgan, M.B.; Mates, J.L. Applications of Artificial Intelligence in Breast Imaging. Radiol. Clin. N. Am. 2021, 59, 139–148. [Google Scholar] [CrossRef] [PubMed]
  112. Quellec, G.; Lamard, M.; Cozic, M.; Coatrieux, G.; Cazuguel, G. Multiple-Instance Learning for Anomaly Detection in Digital Mammography. IEEE Trans. Med. Imaging 2016, 35, 1604–1614. [Google Scholar] [CrossRef]
  113. Mendelson, E.B. Artificial Intelligence in Breast Imaging: Potentials and Limitations. Am. J. Roentgenol. 2019, 212, 293–299. [Google Scholar] [CrossRef]
  114. Mohamed, A.A.; Luo, Y.; Peng, H.; Jankowitz, R.C.; Wu, S. Understanding Clinical Mammographic Breast Density Assessment: A Deep Learning Perspective. J. Digit. Imaging 2018, 31, 387–392. [Google Scholar] [CrossRef]
  115. Kooi, T.; Litjens, G.; van Ginneken, B.; Gubern-Mérida, A.; Sánchez, C.I.; Mann, R.; den Heeten, A.; Karssemeijer, N. Large scale deep learning for computer aided detection of mammographic lesions. Med. Image Anal. 2017, 35, 303–312. [Google Scholar] [CrossRef]
  116. Kim, J.; Kim, H.J.; Kim, C.; Kim, W.H. Artificial intelligence in breast ultrasonography. Ultrasonography 2021, 40, 183–190. [Google Scholar] [CrossRef]
Figure 1. Overview of ML for breast cancer.
Figure 2. (a) A sample from the DDSM dataset [26], and (b) a sample of breast ultrasound images from the dataset [34].
Figure 3. Proposed method for breast cancer segmentation from thermography images [88].
Figure 4. ENAS-B framework proposed in [96] for automatically designing a CNN model for breast cancer classification from ultrasound images.
Table 2. Overview of key AI studies using ultrasound.

| Ref. | Method | Application | Dataset Size | Accuracy |
|------|--------|-------------|--------------|----------|
| [71] | PBAC + DFCN | Segmentation | D1: 570 cases; D2: 128 cases | 88.97% |
| [72] | Multi U-Net | Segmentation | 433 cases | 82% |
| [74] | RGI | Detection | 757 cases | - |
| [75] | YOLOv3 | Detection | 340 cases | 76% |
| [76] | R-CNN, Fast R-CNN, YOLOv3, SSD | Detection | 1043 cases | 87.5% |
| [77] | Faster R-CNN with Inception-ResNet-v2 | Detection | D-A: 306 cases; D-B: 163 cases | - |
| [78] | Region proposal algorithm and bounding-box regression | Detection | 697 cases | - |
| [79] | YOLOv7 | Detection | 655 cases | 95% |
| [80] | CNN transfer learning | Classification | 250 cases | 96.82% |
| [81] | U-Net + SK | Segmentation | 893 cases | 82.6% |
| [82] | T2B and B2T | Segmentation | UDIAT: 163 cases; BUSIS: 184 cases | 96% |
| [83] | Handcrafted ML | Classification | 160 cases | 89.4% |
| [84] | CNN | Classification | 227 cases | 93.4% |
| [85] | CNN transfer learning + RGW | Classification | 200 cases | 99.1% |
| [86] | CNN | Classification | 343 cases | 89.91% |
| [87] | dCNN | Classification | 780 cases | 93.1% |
| [93] | ENAS-based CNN | Classification | 524 cases | 89.3% |
| [94] | Auto-search CNN | Classification | 2167 cases | 85.8% |
| [95] | Auto-search CNN and TL | Classification | 3034 cases | 83.33% |
| [96] | ENAS-Bayesian CNN search | Classification | 2624 cases | 79.4% |