Article

Intelligent Hybrid Deep Learning Model for Breast Cancer Detection
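Xiaomei Wang, Ijaz Ahmad, Danish Javeed, Syeda Armana Zaidi, Fahad M. Alotaibi, Mohamed E. Ghoneim, Yousef Ibrahim Daradkeh, Junaid Asghar and Elsayed Tag Eldin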

1. School of Mathematical Science, University of Electronic Science and Technology of China, Chengdu 611731, China
2. Shenzhen College of Advanced Technology, University of Chinese Academy of Sciences, Shenzhen 518055, China
3. Software College, Northeastern University, Shenyang 110169, China
4. College of Life Sciences, University of Chinese Academy of Sciences, 19A Yuquan Road, Beijing 100049, China
5. Department of Information Systems, Faculty of Computing and Information Technology (FCIT), King Abdulaziz University, Jeddah 21589, Saudi Arabia
6. Department of Mathematical Sciences, Faculty of Applied Science, Umm Al-Qura University, Makkah 21955, Saudi Arabia
7. Faculty of Computers and Artificial Intelligence, Damietta University, Damietta 34517, Egypt
8. Department of Computer Engineering and Networks, College of Engineering at Wadi Addawasir, Prince Sattam Bin Abdulaziz University, Al Kharj 11991, Saudi Arabia
9. Faculty of Pharmacy, Gomal University, Dera Ismail Khan 29111, Pakistan
10. Electrical Engineering Department, Faculty of Engineering & Technology, Future University, New Cairo 11845, Egypt
* Authors to whom correspondence should be addressed.
Electronics 2022, 11(17), 2767; https://doi.org/10.3390/electronics11172767
Submission received: 27 July 2022 / Revised: 19 August 2022 / Accepted: 24 August 2022 / Published: 2 September 2022 / Corrected: 8 December 2022
(This article belongs to the Special Issue Intelligent Data Sensing, Processing, Mining, and Communication)

Abstract: Breast cancer (BC) is a tumor that develops in breast cells and is one of the most common cancers in women, ranking as the second most life-threatening cancer after lung cancer. Early diagnosis and classification of BC are therefore very important. Manual detection, however, is time-consuming, laborious, and prone to pathologist error and incorrect classification. To address these issues, this paper presents a hybrid deep learning (CNN-GRU) model for the automatic detection of BC-IDC (+,−) using whole slide images (WSIs) from the well-known PCam Kaggle dataset. The proposed model combines different layers of CNN and GRU architectures to detect breast IDC (+,−) cancer. Validation tests for the quantitative results were carried out using several performance measures: accuracy (Acc), precision (Prec), sensitivity (Sens), specificity (Spec), AUC, and F1-score. The proposed model achieved the best performance measures (accuracy 86.21%, precision 85.50%, sensitivity 85.60%, specificity 84.71%, F1-score 88%, and AUC 0.89), helping to overcome pathologist error and misclassification. Additionally, the efficiency of the proposed hybrid model was tested and compared with CNN-BiLSTM, CNN-LSTM, and current machine learning and deep learning (ML/DL) models, indicating that the proposed hybrid model is more robust than recent ML/DL approaches.

1. Introduction

Breast cancer arises from abnormally developing tissue in breast cells and is considered one of the most common cancers in women worldwide after lung cancer [1]. In America, breast cancer accounts for approximately 30% of new cancer cases diagnosed in women each year, and the death rate is 190 per 100,000 women per year [2]. The two most common types of BC are ductal carcinoma in situ (DCIS) and invasive ductal carcinoma (IDC) [3]. DCIS cases make up only a very small percentage, about 2%, of BC patients. IDC is more dangerous because it invades the surrounding breast tissue; this category includes 80% of BC patients, with a death rate of roughly 10 per 100 [4].
Moreover, IDC occurs in many different kinds of cells, which makes it very difficult to diagnose and detect. Abnormal cell masses, called tumors, have irregular, arbitrary shapes. These tumors are categorized into two main types: malignant and benign [5]. A malignant tumor spreads into its surrounding tissue and prevents healthy tissue cells from developing, whereas a benign tumor is non-cancerous and does not disturb its neighboring tissue. Even the most experienced pathologists [6] report difficulty in distinguishing differences in tissue structure when detecting BC. Furthermore, the minor variations within these groups require distinct medical procedures and may be paired with various therapies, such as surgery, radiation, and oral doses of medication.
Such findings therefore affect a patient’s emotional and financial condition [7], and early identification of BC-IDC tissue is very important. However, identifying tumor patterns from visual inspection of BC-IDC tissue images is a highly labor-intensive and time-consuming task for pathologists [8]. An automated computer-aided system is urgently needed to assist pathology experts in identifying invasive BC; it would reduce the time and effort pathologists spend analyzing histopathology images [7,8,9]. ML/DL is widely used in invasive breast cancer detection, but most researchers rely on a single deep learning model such as CNN, LSTM, or RNN [10,11,12,13,14,15,16,17,18,19,20,21,22,23], and the performance of these single models has been found unsatisfactory. Hybrid DL models are generally better suited to improving classification performance [23]. To address these issues, this research proposes a hybrid DL (CNN-GRU) model to improve the classification performance and efficiency of BC-IDC tissue detection.
The main contributions of this research are as follows.
  • In this research, a new hybrid DL (CNN-GRU) model is presented that automatically extracts BC-IDC (+,−) features from histopathology images and classifies them into IDC (+) and IDC (−) to reduce pathologist error.
  • The hybrid DL (CNN-GRU) model is proposed to efficiently detect and classify IDC breast cancer in clinical research.
  • In the evaluation of the proposed CNN-GRU model, we compared the key performance measures (Acc (%), Prec (%), Sens (%), Spec (%), F1-score, and AUC) with current ML/DL models implemented on the same Kaggle dataset to determine the classification performance of the hybrid models. The proposed hybrid model achieved impressive classification outcomes compared to the other hybrid DL models.
The remainder of the paper is organized as follows: Section 2 reviews the related work; Section 3 presents a comprehensive explanation of the BC-IDC dataset, the data pre-processing, and the proposed model structure; Section 4 describes the experimental setup; Section 5 defines the performance metrics; Section 6 reports the comparative results and discussion; and Section 7 presents the conclusion and future work.

2. Related Works

A variety of deep learning (DL) models have been introduced for breast cancer detection, such as deep convolutional neural networks (DCNNs) with transfer learning (TL) techniques [22,23,24,25,26], deep belief networks (DBNs), and convolutional neural networks (CNNs) [27,28,29,30]. Some of these efficient models, particularly DL models, have the potential to increase the efficiency and accuracy of BC detection [31,32].
Medical diagnosis research is not limited to CNN models for extracting features from images; it also includes other types of models [33]. Wahab et al. [34] introduced a multi-fused CNN (MF-CNN) for BC detection.
Their results demonstrated that suitable color and textural qualities can help identify ROIs based on the mitotic count at a lower spatial resolution. CNNs open up previously unthinkable possibilities in domains where it is difficult for specialists to hand-craft effective imaging features. Gravina et al. [35] observed that plain CNNs can fall short because cancer images are higher dimensional than ordinary images, and presented breast cancer cues such as lesion segmentation as useful sources of information that can be used to extract shape-related features and pinpoint specific locations in mammography images. Tsochatzidis et al. [36] examined the accuracy of detecting BC in mammography images; using the mammographic mass datasets DDSM-400 and CBIS-DDSM, they obtained accuracies of 70% and 73%, respectively, and compared the resulting segmentation maps with one another to check the performance of the proposed model. Malathi et al. [37] adapted a computer-aided diagnosis (CAD) system for mammograms to enable early detection, assessment, and diagnosis of breast cancer during screening, discussing a breast CAD structure based on CNN feature fusion and deep learning (DL) techniques. Their outcome demonstrated that the random forest algorithm (RFA) had the best accuracy, 78%, with less error than the CNN model. The abnormality of breast tissue has also been explored using the deep belief network (DBN). Desai et al. [38] examined each network’s design and operation, analyzed performance metrics with an accuracy of 79%, and compared how the networks diagnose and categorize BC to determine which surpasses the others; the CNN model showed greater accuracy than the MLP in certain cases of BC-IDC detection. Wahab et al. [34] conducted an earlier study investigating automated identification of the BC-IDC type using CNNs, and several researchers have employed similar ML-based automatic identification approaches, acquiring accurate findings and reducing the number of errors discovered during the diagnostic process. Using the provided dataset, the research of D. Abdelhafiz [39] revealed that augmentation approaches combined with a DL model accurately classify BC. Another study [40] used max pooling in deep CNNs to accurately classify mitosis images of breast cancer.
Their networks applied a pixel-by-pixel method to classify and examine the IDC tissue zones. Murtaza et al. [41] used DL methods to accurately detect cancer. Hossain et al. [13] proposed context-aware stacked CNNs for detecting IDC and DCIS using whole slide images (WSIs); they attained an area under the curve of 0.72 when categorizing nonmalignant and malignant slides, and the system achieved a three-class accuracy of up to 76.2% for WSI classification, suggesting its potential in routine diagnostics. Alhamid et al. and Qian et al. [42,43] described various approaches for identifying BC in their respective studies; their experiments demonstrated that the amplitude and phase of shearlet coefficients can improve detection performance and generalizability. Some earlier research [1,33,34,41] advocated using artificial intelligence (AI) and CNNs for cancer image identification and healthcare monitoring. However, the accuracy achieved was too low for a medical-grade solution [44,45], roughly 60% for full-class detection and 75% for mass-class detection alone. These results can be refined further to achieve a more favorable outcome [46,47]. The present study aims to improve the precision of breast cancer diagnosis.
CNN is the most popular DL model because it can extract a rich set of features by applying various filters in its convolutional layers, together with fully connected (FC) and pooling layers [48]. However, a CNN cannot retain a memory of prior time-series patterns; as a result, it struggles to directly learn the BC-IDC (+,−) features considered most significant and indicative of the disease [49]. Hence, a GRU network layer is concatenated with the CNN model to address this issue, which improves the classification performance for BC-IDC (+,−) while also preserving previous patterns in the data. This research aims to reduce pathologist errors in the diagnosis process and to automate the detection of BC-IDC (+,−) tissue [49,50,51,52,53,54,55]. Table 1 summarizes the existing literature on BC detection using DL models.

3. Materials and Methods

3.1. The Framework of Predicting BC-IDC Detection

The whole process of BC-IDC (+,−) detection using the proposed CNN-GRU model is described as follows: two key phases are required to perform breast cancer (IDC tissue) detection. The first is data collection and pre-processing (labeling and resizing), as shown in Figure 1; the second is to analyze the data using the proposed CNN-GRU model for detection.

3.2. Data Collection and Class Label

In this study, a publicly accessible dataset was obtained from the well-known Kaggle website (http://Kaggle.com, accessed on 10 March 2020) [56]. The full dataset comes from the research in [57] and covers 162 women diagnosed with IDC at the Hospital of the University of Pennsylvania. The dataset contains high-resolution pathology images (2040 × 1536 pixels). To maintain consistency, each slide was scanned at a resolution of 0.25 µm/pixel, and 277,524 small patches were extracted from the original slides: 78,786 IDC (+) samples and 198,738 IDC (−) samples, with the class labeling as given in Figure 2.

3.3. Data Pre-Processing

Pre-processing is an essential step for obtaining the best classification results and is performed on the data before classification. Pre-processing strategies for the breast cancer dataset were investigated to improve detection accuracy, reduce computational time, and speed up the training process. Additionally, by normalizing the data to a mean (µ) of 0 and a standard deviation (σ) of 1, the optimizer can converge faster. The Kaggle data were split into a test set containing 20% of the images and a training set containing the remaining 80%; a validation set is also needed to avoid overfitting. Another issue is the unequal distribution of the dataset classes: the quantity of benign-type data is around 3 times greater than the malignant category, which affects CNN performance. The synthetic minority oversampling technique (SMOTE) was used to balance the samples and decrease overfitting; a sketch of the splitting and balancing steps follows. Random cropping, another important pre-processing step, was also used in this research. Figure 3 presents the IDC class distribution.
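As a rough illustration of these steps, the following Python sketch normalizes the patches, performs the 80/20 split, and applies SMOTE. The arrays are random placeholders standing in for the actual Kaggle patches, and the exact pipeline used in this work may differ.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from imblearn.over_sampling import SMOTE

# Hypothetical placeholders for the 50x50x3 IDC patches and their labels
# (~28% positive, mirroring the class imbalance described above).
X = np.random.rand(1000, 50, 50, 3).astype("float32")
y = (np.random.rand(1000) < 0.28).astype(int)

X = (X - X.mean()) / X.std()  # normalize so that mu = 0 and sigma = 1

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42)  # 80/20 split

# SMOTE expects a 2-D feature matrix, so flatten the patches first,
# oversample the minority class, then restore the image shape.
n, h, w, c = X_train.shape
X_flat, y_train_bal = SMOTE(random_state=42).fit_resample(
    X_train.reshape(n, -1), y_train)
X_train_bal = X_flat.reshape(-1, h, w, c)
```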

3.4. Random Cropping

To handle the BC dataset, another pre-processing approach, random cropping, was used in conjunction with the convolutional neural network. This technique arbitrarily crops different areas of large images to maximize the amount of data available to the CNN; examples of random crops are given in Figure 4.
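A minimal sketch of such a random-crop step is shown below; the 50 × 50 crop size is an assumption chosen to match the model’s input size, and the slide region is a random placeholder.

```python
import numpy as np

def random_crop(image: np.ndarray, crop_h: int = 50, crop_w: int = 50) -> np.ndarray:
    """Return a randomly positioned (crop_h x crop_w) window of an H x W x C image."""
    h, w, _ = image.shape
    top = np.random.randint(0, h - crop_h + 1)   # random vertical offset
    left = np.random.randint(0, w - crop_w + 1)  # random horizontal offset
    return image[top:top + crop_h, left:left + crop_w, :]

# Example: crop several 50x50 patches from one large pathology image.
slide_region = np.random.rand(1536, 2040, 3)  # placeholder for a scanned slide
patches = [random_crop(slide_region) for _ in range(8)]
```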
After the pre-processing steps, the data are passed to the proposed approach (CNN-GRU), described in the following sections, where the CNN and GRU models for IDC (+,−) breast cancer detection are discussed briefly.

3.5. Convolutional Neural Networks (CNN)

CNNs are used to find patterns in images: in the first few layers, the network detects lines and corners, and as these patterns are passed deeper through the network, it identifies increasingly distinctive features [48]. The CNN model is extremely efficient for image feature extraction and, according to the researchers, efficiently identifies BC from breast tissue images. The structure of a CNN consists of three main layer types: convolutional layers (CLs), pooling layers, and fully connected layers (FCs). The CLs compute the outputs of neurons connected to local regions by taking the dot product of the weights and the region. For input images, typical filters cover a small area (3 × 3 to 8 × 8 pixels). Such filters scan the image by sliding a window over it, automatically capturing the recurrent patterns that appear in any image region during the scan. The stride is the distance the filter moves between applications; if the stride is smaller than the filter dimensions, the convolution windows overlap. Figure 5 presents the main architecture of the CNN.
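For illustration, a single convolutional block with the filter sizes and stride described above can be written in Keras as follows; the filter count of 32 is an arbitrary example, not the paper’s configuration.

```python
from tensorflow.keras import layers

# A 3x3 filter bank slides over the image; stride 1 moves the window one
# pixel at a time, so neighboring windows overlap as described in the text.
conv = layers.Conv2D(filters=32, kernel_size=(3, 3), strides=1,
                     padding="same", activation="relu")
pool = layers.MaxPooling2D(pool_size=(2, 2))  # downsamples each feature map
```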

3.6. Gated Recurrent Unit Network (GRU)

Within RNNs, the GRU network is frequently used in the research literature to handle the vanishing gradient problem [58]; its structure is presented in Figure 6. The GRU is more efficient than the LSTM because it uses fewer gates and has no internal cell state; instead, information is kept in the hidden state. The forward and backward information is combined in the update gate (z), while previous information is stored via the reset gate (r). The current memory gate takes advantage of the reset gate to save and maintain the essential information from the prior state; it can also incorporate nonlinearity into the input while giving it zero-mean properties. The mathematical expressions for the basic GRU gates are as follows:
$$L_t = \sigma\left(Z_t \cdot K_{xr} + R_{t-1} \cdot W_{hr} + C_r\right)$$
$$M_t = \sigma\left(Z_t \cdot K_{xz} + R_{t-1} \cdot W_{hz} + C_z\right)$$
where $K_{xr}$ and $K_{xz}$ are input weight parameters, $W_{hr}$ and $W_{hz}$ are the corresponding recurrent weights, $C_r$ and $C_z$ are bias terms, and $\sigma$ denotes the sigmoid function.
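A direct NumPy transcription of these two gate equations, keeping the paper’s symbols (Z_t for the input, R_prev for the previous hidden state R_{t−1}), might look like this sketch:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_gates(Z_t, R_prev, K_xr, W_hr, C_r, K_xz, W_hz, C_z):
    """Compute the reset gate L_t and update gate M_t from the equations above."""
    L_t = sigmoid(Z_t @ K_xr + R_prev @ W_hr + C_r)  # reset gate
    M_t = sigmoid(Z_t @ K_xz + R_prev @ W_hz + C_z)  # update gate
    return L_t, M_t
```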

3.7. CNN-GRU

The CNN-GRU model consists of 4 convolutional layers (CLs), 3 max-pooling layers, and 3 fully connected (FC) layers. Rectified linear units (ReLUs) were used as the activation function because they do not activate every neuron simultaneously, which allows the model to learn faster and perform better. Initially, input images of size (50, 50, 3), i.e., a height and width of 50 pixels with 3 channels, are fed into the CLs. Features are extracted by passing the input through CL 1, whose feature-map output shape was 128; its stride was set to 1 and its kernel size to 3 × 3. ReLU was applied after CL 1 to introduce nonlinearity. After CL 1, the output was 128 feature maps of size (50, 50); the pooling layer then reduced the training parameters to (48, 48). To protect the model from overfitting, the (48, 48, 128) output of the pooling layer was carried through a dropout layer.
Initially, the dropout after the convolutional layer was 0.3; an additional dropout of 0.9 was applied in the first two fully connected layers to further counter overfitting. After each max-pooling layer and CL, the number of training parameters dropped dramatically, followed by ReLU and dropout. After the convolutional stage, the data are flattened into a 1-D array to serve as input to the FC layers; flattening produced a feature map of 512 with a training-parameter size of (32, 32). After completing the 2-D convolutional layers, dropout was employed to generate 256 feature maps. The GRU model used an FC layer of 512 neurons to tackle the vanishing gradient issue, after which two further FC layers were used. Finally, a SoftMax layer performed the binary classification, as presented in Table 2 and Figure 7.
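A minimal Keras sketch of the described pipeline is given below, assuming 50 × 50 × 3 inputs. The layer sizes only approximate Table 2, and reshaping the final feature maps into a row-wise sequence for the GRU is our assumption about how the two stages are joined, not the paper’s exact wiring.

```python
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Input(shape=(50, 50, 3)),
    layers.Conv2D(128, (3, 3), activation="relu"),
    layers.MaxPooling2D((2, 2), strides=1),
    layers.Dropout(0.3),
    layers.Conv2D(256, (3, 3), activation="relu"),
    layers.MaxPooling2D((2, 2), strides=1),
    layers.Conv2D(256, (3, 3), activation="relu"),
    layers.MaxPooling2D((2, 2), strides=1),
    layers.Dropout(0.5),
    layers.Conv2D(256, (3, 3), activation="relu"),
    layers.Dropout(0.5),
    # Treat each row of the final 39x39x256 feature maps as one time step
    # so the GRU can scan the spatial features as an ordered sequence.
    layers.Reshape((39, 39 * 256)),
    layers.GRU(512),
    layers.Dense(1024, activation="relu"),
    layers.Dropout(0.3),
    layers.Dense(2, activation="softmax"),  # IDC (+) vs. IDC (-)
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```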
Figure 8 shows the flowchart of the proposed model from pre-processing to classification. Initially, the Kaggle dataset is split into training and testing samples; the proposed CNN-GRU is then trained on the input image data with various training parameters to extract the important features; after feature extraction, the proposed model is tested using the different key performance measures for BC-IDC (+,−) classification.

4. Experimental Setup

For this experiment, we utilized an Intel Core i7 CPU and an NVIDIA graphics processing unit (GPU). The proposed model was trained with Keras in a Python programming environment. Table 3 provides details of the software and hardware specifications.

5. Performance Metrics

The following performance metrics were computed to test how well the CNN-GRU model classifies BC-IDC (+,−) tissue.
  • True positive (TP): positive IDC (+) samples correctly predicted as positive.
  • True negative (TN): negative IDC (−) samples correctly predicted as negative.
  • False positive (FP): negative IDC (−) samples incorrectly predicted as IDC (+).
  • False negative (FN): positive IDC (+) samples incorrectly predicted as IDC (−).
The mathematical expressions for accuracy (Acc), precision (Prec), sensitivity (Sens), specificity (Spec), F1-score, and, most importantly, Matthews correlation coefficient (MCC), which together with the AUC were used as performance indicators for breast IDC cancer detection, are as follows:
$$\mathrm{Acc}\,(\%) = \frac{TP + TN}{TP + TN + FP + FN}$$
$$\mathrm{Prec}\,(\%) = \frac{TP}{TP + FP}$$
$$\mathrm{Sens}\,(\%) = \frac{TP}{TP + FN}$$
$$\mathrm{Spec}\,(\%) = \frac{TN}{TN + FP}$$
$$\mathrm{F1\text{-}Score}\,(\%) = \frac{2 \times \mathrm{Sens} \times \mathrm{Prec}}{\mathrm{Sens} + \mathrm{Prec}}$$
$$\mathrm{MCC}\,(\%) = \frac{TP \times TN - FP \times FN}{\sqrt{(TP + FP)(TP + FN)(TN + FP)(TN + FN)}}$$
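These formulas translate directly into a short Python helper; the sketch below computes each score from raw confusion-matrix counts (the example counts are hypothetical).

```python
import numpy as np

def classification_metrics(tp, tn, fp, fn):
    """Acc, Prec, Sens, Spec, F1, and MCC from confusion-matrix counts."""
    acc = (tp + tn) / (tp + tn + fp + fn)
    prec = tp / (tp + fp)
    sens = tp / (tp + fn)                # also the true positive rate (TPR)
    spec = tn / (tn + fp)                # also the true negative rate (TNR)
    f1 = 2 * sens * prec / (sens + prec)
    mcc = (tp * tn - fp * fn) / np.sqrt(
        float((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn)))
    return {"Acc": acc, "Prec": prec, "Sens": sens,
            "Spec": spec, "F1": f1, "MCC": mcc}

# Example with hypothetical counts:
print(classification_metrics(tp=850, tn=840, fp=150, fn=160))
```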

6. Result and Discussion

The experimental study was conducted with three hybrid DL models: CNN-LSTM, CNN-BiLSTM, and the proposed CNN-GRU. The models’ results were compared on a testing dataset.

6.1. Analysis of Performance Measure (Acc, Pres, Sens, Spec, F1 Score, and AUC)

Accuracy is one of the most important factors to consider when evaluating the effectiveness of a classifier. Precision (Prec) quantifies the degree of accuracy of the positive predictions. The F1-score, the harmonic mean of precision and sensitivity (TPR), is a reasonable metric that reveals the robustness of an IDC breast cancer architecture and has been examined across many IDC scenarios in the previous literature. The AUC measures how well a model distinguishes between the classes. Based on these performance indicators, the proposed model was tested and compared with CNN-BiLSTM and CNN-LSTM for BC-IDC (+,−) detection, and the CNN-GRU model performed better: because the GRU can be modified easily and does not need memory units, it has fewer parameters to train.
The proposed method attained an Acc of 86%, Prec of 85%, Sens of 85%, an F1-score of 86%, and an AUC of 0.89. Figure 9 presents the analysis of all performance indicators measured during prediction.

6.2. Confusion Matrix

To check the classification performance of the models, we computed confusion matrices, which record how the BC-IDC (+,−) samples are classified. The CNN-GRU was evaluated on this scale and compared with the CNN-LSTM and CNN-BiLSTM models; its performance is superior to the other hybrid models, and it classifies BC-IDC (+,−) more accurately, as presented in Figure 10.

6.3. ROC Curve Analysis

The receiver operating characteristic (ROC) curve is a graph that presents the classification performance of a model across all classification thresholds, plotting the true positive rate (TPR) on the Y-axis against the false positive rate (FPR) on the X-axis. Figure 11 presents the ROC curves of the CNN-GRU, CNN-BiLSTM, and CNN-LSTM models, showing that the proposed method classifies better than the other hybrid models.
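In practice, such curves are typically produced from the predicted class probabilities; a minimal sketch using scikit-learn, with hypothetical labels and scores, is:

```python
import numpy as np
from sklearn.metrics import roc_curve, auc

# Hypothetical ground-truth labels and predicted IDC (+) probabilities.
y_true = np.array([0, 0, 1, 1, 0, 1, 0, 1])
y_score = np.array([0.10, 0.40, 0.35, 0.80, 0.20, 0.90, 0.55, 0.70])

fpr, tpr, thresholds = roc_curve(y_true, y_score)  # FPR on X, TPR on Y
print(f"AUC = {auc(fpr, tpr):.2f}")
```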

6.4. FNR, FOR, FPR, and FDR Analysis

The proposed IDC breast cancer detection approach can be investigated further through additional key performance metrics: the false omission rate (FOR), false positive rate (FPR), false negative rate (FNR), and false discovery rate (FDR). The CNN-GRU model performed better than CNN-LSTM and CNN-BiLSTM, with an FPR of 0.0030, FOR of 0.0024, FNR of 0.0012, and FDR of 0.0013, as shown in Figure 12.
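The four rates follow their standard definitions (the paper reports the values but not the formulas); a sketch computing them from confusion-matrix counts:

```python
def error_rates(tp, tn, fp, fn):
    """FPR, FNR, FOR, and FDR under their standard definitions."""
    return {
        "FPR": fp / (fp + tn),  # false positive rate
        "FNR": fn / (fn + tp),  # false negative rate
        "FOR": fn / (fn + tn),  # false omission rate
        "FDR": fp / (fp + tp),  # false discovery rate
    }
```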

6.5. Evaluation of TNR, TPR and MCC

To evaluate the performance of the proposed hybrid model, the confusion-matrix technique was used to derive the TNR, TPR, and MCC values. Figure 13 presents the TPR, TNR, and MCC, which are 86%, 84%, and 85.5%, respectively; the proposed CNN-GRU model achieves the best outcomes compared to the other hybrid models.

6.6. Model Efficiency

The time complexity (ms) measures the model’s processing time during classification; the fact that most of the training was completed offline was not considered in the experiment. As shown in Figure 14, the proposed CNN-GRU takes 4.4 ms, which is much less than the CNN-BiLSTM and CNN-LSTM times of 6.4 and 7.4 ms, respectively.

6.7. Comparative Analysis of the Proposed Hybrid Algorithm with Existing ML/DL Models

To further investigate the IDC (+,−) classification performance of the proposed hybrid model, we compared it with the best DL models, i.e., LSTM, CNN, DNN, and BiLSTM, using the key performance measures (Acc, Prec, Sens, Spec, and F1-score). The CNN-GRU model delivered phenomenal classification measures compared to these models, while the LSTM had the weakest performance in IDC (+,−) detection, as presented in Figure 15. The proposed hybrid algorithm was also compared with existing ML/DL approaches for BC-IDC (+,−) tissue classification.
To expand the scope of the CNN-GRU validation, a complete performance comparison was made between the CNN-GRU and several existing ML/DL frameworks from the research literature. The CNN-GRU attained outstanding performance on all of the performance metrics listed above, surpassing the existing literature; Table 4 summarizes this comparative investigation. The proposed hybrid model nevertheless has some disadvantages: during training, it requires high computing resources and specialized hardware, including a good GPU.

7. Conclusions and Future Work

The aim of automatic detection of BC-IDC (+,−) tissue is to improve the treatment of patients whose disease is very difficult to diagnose at an early stage. A CNN-GRU method was proposed in the present work that examines BC-IDC tissue areas in WSIs for automated detection and classification. In this research, the proposed model used different layer architectures to automatically detect breast cancer (IDC tissue). Validation tests for the quantitative results were carried out using the key performance indicators (Acc (%), Prec (%), Sens (%), Spec (%), AUC, and F1-score (%)). The proposed system produced an Acc of 86.21%, Prec of 85.90%, Sens of 85.71%, Spec of 84.51%, F1-score of 88%, and AUC of 0.89, which can reduce pathologist error and effort during the clinical process. Furthermore, the results of the proposed model were compared with CNN-BiLSTM, CNN-LSTM, and other existing ML/DL models, indicating that the CNN-GRU has 4 to 5% higher accuracy, better Prec (%), Sens (%), Spec (%), AUC, and F1-score (%), and lower time complexity (ms). The fundamental limitation of this research is the use of a secondary database (Kaggle); future studies should use primary data to improve the accuracy of findings linked to BC detection.

Author Contributions

Conceptualization, X.W. and I.A.; data curation, S.A.Z.; formal analysis, I.A. and D.J.; funding acquisition, E.T.E.; investigation, X.W., M.E.G. and J.A.; methodology, I.A., D.J. and S.A.Z.; project administration, Y.I.D.; resources, F.M.A. and Y.I.D.; validation, M.E.G.; visualization, F.M.A.; writing—original draft, D.J. All authors have read and agreed to the published version of the manuscript.

Funding

This research work was supported and funded by Future University in Egypt under project number FUE-2022/120.

Data Availability Statement

The dataset used for investigation and analysis in this research will be made available with the corresponding author’s permission.

Acknowledgments

The authors would like to thank the Deanship of Scientific Research at Umm Al-Qura University for supporting this work by Grant Code: 22UQU4331317DSR001.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

All the abbreviations of this research are as follows:
IDC — Invasive ductal carcinoma
ML — Machine learning
DL — Deep learning
DCIS — Ductal carcinoma in situ
BCW — Breast cancer Wisconsin
WSI — Whole slide images
CNN — Convolutional neural network
LSTM — Long short-term memory
GRU — Gated recurrent unit
BiLSTM — Bidirectional long short-term memory
DNN — Deep neural network
GPU — Graphics processing unit

References

  1. Faruqui, N.; Yousuf, M.A.; Whaiduzzaman, M.; Azad, A.K.M.; Barros, A.; Moni, M.A. LungNet: A hybrid deep-CNN model for lung cancer diagnosis using CT and wearable sensor-based medical IoT data. Comput. Biol. Med. 2021, 139, 104961.
  2. Adrienne, W.G.; Winer, E.P. Breast cancer treatment: A review. JAMA 2019, 321, 288–300.
  3. Bray, F.; Ferlay, J.; Soerjomataram, I.; Siegel, R.L.; Torre, L.A.; Jemal, A. Global cancer statistics 2018: GLOBOCAN estimates of incidence and mortality worldwide for 36 cancers in 185 countries. CA Cancer J. Clin. 2018, 68, 394–424.
  4. Siegel, R.; Kimberly, D.M. Cancer statistics, 2018. CA Cancer J. Clin. 2018, 68, 7–30.
  5. Aly, G.H.; Marey, M.; El-Sayed, S.A.; Tolba, M.F. YOLO based breast masses detection and classification in full-field digital mammograms. Comput. Methods Programs Biomed. 2021, 200, 105823.
  6. Khamparia, A.; Bharati, S.; Podder, P.; Gupta, D.; Khanna, A.; Phung, T.K.; Thanh, D.N.H. Diagnosis of breast cancer based on modern mammography using hybrid transfer learning. Multidimens. Syst. Signal Process. 2021, 32, 747–765.
  7. Naik, S.; Doyle, S.; Agner, S.; Madabhushi, A.; Feldman, M.; Tomaszewski, A.J. Automated gland and nuclei segmentation for grading of prostate and breast cancer histopathology. In Proceedings of the 2008 5th IEEE International Symposium on Biomedical Imaging: From Nano to Macro, Paris, France, 14–17 May 2008; pp. 284–287.
  8. Edge, S.; Byrd, D.R.; Compton, C.C.; Fritz, A.G.; Greene, F.; Trotti, A. AJCC Cancer Staging Handbook, 7th ed.; Springer: New York, NY, USA, 2010.
  9. Dundar, M.M.; Badve, S.; Bilgin, G.; Raykar, V.; Jain, R.; Sertel, O.; Gurcan, M.N. Computerized classification of intraductal breast lesions using histopathological images. IEEE Trans. Biomed. Eng. 2011, 58, 1977–1984.
  10. Aloysius, N.; Geetha, M. A review on deep convolutional neural networks. In Proceedings of the International Conference on Communication and Signal Processing (ICCSP), Chennai, India, 6–8 April 2017; IEEE: Piscataway, NJ, USA; pp. 0588–0592.
  11. Ahmad, I.; Ullah, I.; Khan, W.U.; Rehman, A.U.; Adrees, M.S.; Saleem, M.Q.; Cheikhrouhou, O.; Hamam, H.; Shafiq, M. Efficient algorithms for E-healthcare to solve metaobject fuse detection problem. J. Healthc. Eng. 2021, 2021, 9500304.
  12. Chen, M.; Yang, J.; Hu, L.; Hossain, M.S.; Muhammad, G. Urban healthcare big data system based on crowdsourced and cloud-based air quality indicators. IEEE Commun. Mag. 2018, 56, 14–20.
  13. Hossain, M.S. Cloud-supported cyber-physical localization framework for patients monitoring. IEEE Syst. J. 2017, 11, 118–127.
  14. Alanazi, S.A.; Kamruzzaman, M.M.; Alruwaili, M.; Alshammari, N.; Alqahtani, S.A.; Karime, A. Measuring and preventing COVID-19 using the SIR model and machine learning in smart health care. J. Healthc. Eng. 2020, 2020, 8857346.
  15. Benjelloun, M.; El Adoui, M.; Larhmam, M.A.; Mahmoudi, S.A. Automated breast tumor segmentation in DCE-MRI using deep learning. In Proceedings of the 2018 4th International Conference on Cloud Computing Technologies and Applications (Cloudtech), Brussels, Belgium, 26–28 November 2018; IEEE: Piscataway, NJ, USA, 2018.
  16. Tufail, A.B.; Ma, Y.K.; Kaabar, M.K.; Martínez, F.; Junejo, A.R.; Ullah, I.; Khan, R. Deep learning in cancer diagnosis and prognosis prediction: A minireview on challenges, recent trends, and future directions. Comput. Math. Methods Med. 2021, 2021, 9025470.
  17. Khan, R.; Yang, Q.; Ullah, I.; Rehman, A.U.; Tufail, A.B.; Noor, A.; Cengiz, K. 3D convolutional neural networks based automatic modulation classification in the presence of channel noise. IET Commun. 2021, 16, 497–509.
  18. Tufail, A.B.; Ullah, I.; Khan, W.U.; Asif, M.; Ahmad, I.; Ma, Y.K.; Ali, M. Diagnosis of diabetic retinopathy through retinal fundus images and 3D convolutional neural networks with limited number of samples. Wirel. Commun. Mob. Comput. 2021, 2021, 6013448.
  19. Kamruzzaman, M.M. Architecture of smart health care system using artificial intelligence. In Proceedings of the 2020 IEEE International Conference on Multimedia & Expo Workshops (ICMEW), London, UK, 6–10 July 2020; pp. 1–6.
  20. Min, W.; Bao, B.-K.; Xu, C.; Hossain, M.S. Cross-platform multi-modal topic modeling for personalized inter-platform recommendation. IEEE Trans. Multimed. 2015, 17, 1787–1801.
  21. Ahmad, I.; Liu, Y.; Javeed, D.; Shamshad, N.; Sarwr, D.; Ahmad, S. A review of artificial intelligence techniques for selection & evaluation. IOP Conf. Ser. Mater. Sci. Eng. 2020, 853, 012055.
  22. Hossain, M.S.; Amin, S.U.; Alsulaiman, M.; Muhammad, G. Applying deep learning for epilepsy seizure detection and brain mapping visualization. ACM Trans. Multimed. Comput. Appl. 2019, 15, 1–17.
  23. Wang, J.L.; Ibrahim, A.K.; Zhuang, H.; Ali, A.M.; Li, A.Y.; Wu, A. A study on automatic detection of IDC breast cancer with convolutional neural networks. In Proceedings of the 2018 International Conference on Computational Science and Computational Intelligence (CSCI), Las Vegas, NV, USA, 12–14 December 2018; pp. 703–708.
  24. Aurna, N.F.; Abu Yousuf, M.; Abu Taher, K.; Azad, A.; Moni, M.A. A classification of MRI brain tumor based on two stage feature level ensemble of deep CNN models. Comput. Biol. Med. 2022, 146, 105539.
  25. Shayma’a, A.H.; Sayed, M.S.; Abdalla, M.I.; Rashwan, M.A. Breast cancer masses classification using deep convolutional neural networks and transfer learning. Multimed. Tools Appl. 2020, 79, 30735–30768.
  26. Hossain, M.S.; Muhammad, G. Emotion-aware connected healthcare big data towards 5G. IEEE Internet Things J. 2018, 5, 2399–2406.
  27. Mahbub, T.N.; Yousuf, M.A.; Uddin, M.N. A modified CNN and fuzzy AHP based breast cancer stage detection system. In Proceedings of the 2022 International Conference on Advancement in Electrical and Electronic Engineering (ICAEEE), Gazipur, Bangladesh, 24–26 February 2022; IEEE: Piscataway, NJ, USA, 2022.
  28. Pang, H.; Lin, W.; Wang, C.; Zhao, C. Using transfer learning to detect breast cancer without network training. In Proceedings of the 2018 5th IEEE International Conference on Cloud Computing and Intelligence Systems (CCIS), Nanjing, China, 23–25 November 2018; IEEE: Piscataway, NJ, USA, 2018.
  29. Cruz-Roa, A.; Basavanhally, A.; González, F.; Gilmore, H.; Feldman, M.; Ganesan, S.; Madabhushi, A. Automatic detection of invasive ductal carcinoma in whole slide images with convolutional neural networks. In Medical Imaging 2014: Digital Pathology; SPIE: Bellingham, WA, USA, 2014; Volume 9041, p. 904103.
  30. Amin, S.U.; Alsulaiman, M.; Muhammad, G.; Bencherif, M.A.; Hossain, M.S. Multilevel weighted feature fusion using convolutional neural networks for EEG motor imagery classification. IEEE Access 2019, 7, 18940–18950.
  31. Sharma, S.; Aggarwal, A.; Choudary, T. Breast cancer detection using machine learning algorithms. In Proceedings of the International Conference on Computational Techniques, Electronics and Mechanical Systems (CTEMS), Belgaum, India, 21–22 December 2018; pp. 114–118.
  32. Jafarbigloo, S.K.; Danyali, H. Nuclear atypia grading in breast cancer histopathological images based on CNN feature extraction and LSTM classification. CAAI Trans. Intell. Technol. 2021, 6, 426–439.
  33. Nawaz, M.; Sewissy, A.A.; Soliman, T.H.A. Multi-class breast cancer classification using deep learning convolution neural network. Int. J. Adv. Comput. Sci. Appl. 2018, 9, 316–322.
  34. Wahab, N.; Khan, A. Multifaceted fused-CNN based scoring of breast cancer whole-slide histopathology images. Appl. Soft Comput. 2020, 97, 106808.
  35. Gravina, M.; Marrone, S.; Sansone, M.; Sansone, C. DAE-CNN: Exploiting and disentangling contrast agent effects for breast lesions classification in DCE-MRI. Pattern Recognit. Lett. 2021, 145, 67–73.
  36. Tsochatzidis, L.; Koutla, P.; Costaridou, L.; Pratikakis, I. Integrating segmentation information into CNN for breast cancer diagnosis of mammographic masses. Comput. Methods Programs Biomed. 2021, 200, 105913.
  37. Malathi, M.; Sinthia, P.; Farzana, F.; Mary, G.A.A. Breast cancer detection using active contour and classification by deep belief network. Mater. Today Proc. 2021, 45, 2721–2724.
  38. Desai, M.; Shah, M. An anatomization on breast cancer detection and diagnosis employing multi-layer perceptron neural network (MLP) and convolutional neural network (CNN). Clin. eHealth 2021, 4, 1–11.
  39. Abdelhafiz, D.; Bi, J.; Ammar, R.; Yang, C.; Nabavi, S. Convolutional neural network for automated mass segmentation in mammography. BMC Bioinform. 2020, 21, 192.
  40. Rezaeilouyeh, H.; Mollahosseini, A.; Mahoor, M.H. Microscopic medical image classification framework via deep learning and shearlet transform. J. Med. Imaging 2016, 3, 044501.
  41. Murtaza, G.; Shuib, L.; Wahab, A.W.A.; Mujtaba, G.; Nweke, H.F.; Al-Garadi, M.A.; Zulfiqar, F.; Raza, G.; Azmi, N.A. Deep learning-based breast cancer classification through medical imaging modalities: State of the art and research challenges. Artif. Intell. Rev. 2020, 53, 1655–1720.
  42. Alhamid, M.F.; Rawashdeh, M.; Al Osman, H.; Hossain, M.S.; El Saddik, A. Towards context-sensitive collaborative media recommender system. Multimed. Tools Appl. 2015, 74, 11399–11428.
  43. Qian, S.; Zhang, T.; Xu, C.; Hossain, M.S. Social event classification via boosted multimodal supervised latent dirichlet allocation. ACM Trans. Multimed. Comput. Commun. Appl. 2015, 11, 1–22.
  44. Singh, D.; Singh, S.; Sonawane, M.; Batham, R.; Satpute, P.A. Breast cancer detection using convolution neural network. Int. Res. J. Eng. Technol. 2017, 5, 316–318.
  45. Javeed, D.; Gao, T.; Khan, M.T.; Shoukat, D. A hybrid intelligent framework to combat sophisticated threats in secure industries. Sensors 2022, 22, 1582.
  46. Alhussein, M.; Muhammad, G.; Hossain, M.S.; Amin, S.U. Cognitive IoT-cloud integration for smart healthcare: Case study for epileptic seizure detection and monitoring. Mob. Netw. Appl. 2018, 23, 1624–1635.
  47. Janowczyk, A. Use Case 6: Invasive Ductal Carcinoma (IDC) Segmentation. Available online: http://www.andrewjanowczyk.com/use-case-6-invasive-ductal-carcinoma-idc-segmentation/ (accessed on 10 March 2022).
  48. Khuriwal, N.; Mishra, N. Breast cancer detection from histopathological images using deep learning. In Proceedings of the 3rd International Conference and Workshops on Recent Advances and Innovations in Engineering, Jaipur, India, 22–25 November 2018.
  49. Kumar, A.; Sushil, R.; Tiwari, A.K. Comparative study of classification techniques for breast cancer diagnosis. Int. J. Comput. Sci. Eng. 2019, 7, 234–240.
  50. Nallamala, S.H.; Mishra, P.; Koneru, S.V. Breast cancer detection using machine learning way. Int. J. Recent Technol. Eng. 2019, 8, 1402–1405.
  51. Abdolahi, M.; Salehi, M.; Shokatian, I.; Reiazi, R. Artificial intelligence in automatic classification of invasive ductal carcinoma breast cancer in digital pathology images. Med. J. Islamic Repub. Iran 2020, 34, 140.
  52. Weal, E.F.; Amr, S.G. A deep learning approach for breast cancer mass detection. Int. J. Adv. Comput. Sci. Appl. 2019, 10, 175–182.
  53. Mekha, P.; Teeyasuksaet, N. Deep learning algorithms for predicting breast cancer based on tumor cells. In Proceedings of the 4th International Conference on Digital Arts, Media and Technology and 2nd ECTI Northern Section Conference on Electrical, Electronics, Computer and Telecommunication Engineering, Nan, Thailand, 30 January–2 February 2019; pp. 343–346.
  54. Kavitha, T.; Mathai, P.P.; Karthikeyan, C.; Ashok, M.; Kohar, R.; Avanija, J.; Neelakandan, S. Deep learning based capsule neural network model for breast cancer diagnosis using mammogram images. Interdiscip. Sci. Comput. Life Sci. 2022, 14, 113–129.
  55. Jabeen, K.; Khan, M.A.; Alhaisoni, M.; Tariq, U.; Zhang, Y.-D.; Hamza, A.; Mickus, A.; Damaševičius, R. Breast cancer classification from ultrasound images using probability-based optimal deep learning feature fusion. Sensors 2022, 22, 807.
  56. Available online: https://www.kaggle.com/c/histopathologic-cancer-detection/data (accessed on 10 March 2022).
  57. Ramadan, S.Z. Using convolutional neural network with cheat sheet and data augmentation to detect breast cancer in mammograms. Comput. Math. Methods Med. 2020, 2020, 2020.
  58. Mehmood, M.; Ayub, E.; Ahmad, F.; Alruwaili, M.; Alrowaili, Z.A.; Alanazi, S.; Rizwan, M.H.M.; Naseem, S.; Alyas, T. Machine learning enabled early detection of breast cancer by structural analysis of mammograms. Comput. Mater. Contin. 2021, 67, 641–657.
  59. Isfahani, Z.N.; Jannat-Dastjerdi, I.; Eskandari, F.; Ghoushchi, S.J.; Pourasad, Y. Presentation of novel hybrid algorithm for detection and classification of breast cancer using growth region method and probabilistic neural network. Comput. Intell. Neurosci. 2021, 2021, 1–14.
  60. Addeh, J.; Ata, E. Breast cancer recognition using a novel hybrid intelligent method. J. Med. Signals Sens. 2012, 2, 95.
  61. Waddell, M.; Page, D.; Shaughnessy, J., Jr. Predicting cancer susceptibility from single-nucleotide polymorphism data: A case study in multiple myeloma. In Proceedings of the 5th International Workshop on Bioinformatics, Chicago, IL, USA, 21 August 2005; pp. 21–28.
  62. Kourou, K.; Exarchos, T.P.; Exarchos, K.P.; Karamouzis, M.V.; Fotiadis, D.I. Machine learning applications in cancer prognosis and prediction. Comput. Struct. Biotechnol. J. 2015, 13, 8–17.
  63. Kanavati, F.; Ichihara, S.; Tsuneki, M. A deep learning model for breast ductal carcinoma in situ classification in whole slide images. Virchows Arch. 2022, 480, 1009–1022.
  64. Gupta, I.; Nayak, S.R.; Gupta, S.; Singh, S.; Verma, K.; Gupta, A.; Prakash, D. A deep learning based approach to detect IDC in histopathology images. Multimed. Tools Appl. 2022, 1–22.
  65. Snigdha, V.; Nair, L.S. Hybrid feature-based invasive ductal carcinoma classification in breast histopathology images. In Machine Learning and Autonomous Systems; Springer: Singapore, 2022; pp. 515–525.
Figure 1. The framework of predicting BC-IDC (+,−) tissues detection.
Figure 2. The class labeling of IDC (+,−) tissues of the BC datasets.
Figure 3. BC-IDC (+,−) class distribution: (a) unbalanced, before oversampling; (b) balanced, after oversampling.
Figure 4. A random sampling of the BC-IDC (+,−) breast pathology images data.
Figure 5. The main architecture of the CNN model.
Figure 6. The basic structure diagram of the GRU model.
Figure 7. The basic architecture of the proposed CNN-GRU model.
Figure 8. Flowchart of the proposed CNN-GRU model.
Figure 9. The comparative study of the CNN-GRU with the hybrid models (CNN-LSTM, CNN-BiLSTM) for binary BC-IDC (+,−) classification.
Figure 10. The confusion matrices of the CNN-GRU, CNN-LSTM, and CNN-BiLSTM for binary BC-IDC (+,−) image classification.
Figure 11. The ROC curve analysis of the CNN-GRU along with CNN-LSTM and CNN-BiLSTM for BC-IDC (+,−) detection.
Figure 12. Analysis of FNR, FDR, FPR, and FOR.
Figure 13. TPR, TNR, and MCC.
Figure 14. The detection time (ms) of the CNN-GRU, CNN-BiLSTM, and CNN-LSTM for BC-IDC (+,−) classification.
Figure 15. Comparative results of the CNN-GRU with current ML/DL models in binary BC-IDC (+,−) classification.
Table 1. Comprehensive overview of the existing literature using the DL model in BC detection.

References | Dataset | Model | Achievement
[49] | Kaggle | CNN, LSTM | CNN achieved higher accuracy (81%) and sensitivity (78.5%) than LSTM for the binary classification tasks.
[50] | BreakHis | CNN, DCNN | CNN had better accuracy than DCNN, achieving 80% accuracy.
[51] | MIAS | CNN | The proposed model achieved an accuracy of 70.9% for binary classification.
[52] | BCW (Breast Cancer Wisconsin) | DNN | Obtained an accuracy of 79.01%.
[53] | Kaggle | VGG-16, CNN | Achieved 80% accuracy, Sens 79.9%, and Spec 78%.
[54] | UCI-cancer | RNN, GRU | Proposed approaches performed better on the three toy problems and achieved 78.90% accuracy.
[55] | BCW (Breast Cancer Wisconsin) | CNN | Obtained 73% accuracy across four cancer classifications and 70.50% for distinguishing two mixed groupings of classes.
Table 2. A comprehensive summary of parameters used for the proposed model.

Proposed Layers | Stride | Padding | Kernel Size | Input Data | Act. Function | Output
Con2D_Layer_1 | S = 1 | Same | 3 × 3 | (50, 50, 3) | ReLU | (50, 50, 128)
Max_pooling_1 | S = 1 | Same | 2 × 2 | (48, 48, 128) | — | (48, 48, 128)
Drop_out = 0.3 | — | — | — | (48, 48, 128) | — | (48, 48, 128)
Con2D_Layer_2 | S = 1 | Same | 3 × 3 | (48, 48, 128) | ReLU | (46, 46, 256)
Max_pooling_2 | S = 1 | Same | 2 × 2 | (46, 46, 256) | — | (44, 44, 256)
Drop_out = 0.9 | — | — | — | (44, 44, 256) | — | (44, 44, 256)
Con2D_Layer_3 | S = 1 | Same | 3 × 3 | (44, 44, 256) | ReLU | (42, 42, 256)
Max_pooling_3 | S = 1 | Same | 2 × 2 | (42, 42, 256) | — | (41, 41, 256)
Drop_out = 0.5 | — | — | — | (41, 41, 256) | — | (41, 41, 256)
Con2D_Layer_4 | S = 1 | Same | 3 × 3 | (41, 41, 256) | ReLU | (39, 39, 256)
Drop_out = 0.9 | — | — | — | (39, 39, 256) | — | (39, 39, 256)
Flatten | — | — | — | (32, 32, 512) | — | (524,288)
Dense1 | — | — | — | (524,288) | — | (1024)
Drop_out = 0.3 | — | — | — | (1024) | — | (1024)
Dense2 | — | — | — | (1024) | — | (2000)
GRU | — | — | — | (None, 512) | — | —
Dense3 | — | — | — | (2000) | — | (2000)
Table 3. The proposed model experimental setup.

RAM | 8 GB
CPU | 2.80 GHz processor, Core i7, 7th Gen
GPU | NVIDIA 1060, 8 GB
Languages | Python, version 3.8
OS | 64-bit Windows
Libraries | Scikit-learn, NumPy, Pandas, Keras, TensorFlow
Table 4. Comparative results of the CNN-GRU with recent ML/DL models for BC-IDC (+,−) tissue classification.

Publication | Cancer Type | Models | Dataset | Acc (%) | Sens (%) | Spec (%) | F1-Score (%)
Proposed Model | BC | CNN-GRU | Kaggle | 86.21 | 85 | 84.60 | 86
[58] | Breast cancer | DCNNs | BreakHis | 80 | 79.90 | 79 | 79
[59] | IDC (+,−) | CNN | Kaggle | 75.70 | 74.50 | 74 | 76
[60] | Breast cancer | FCM-GA | Breast Cancer Wisconsin (BCW) | 76 | 75.50 | 75.10 | 78
[61] | Breast cancer | SVM | Kaggle | 65 | 64.90 | 63.50 | 66
[62] | Colon carcinomatosis | BN | Kaggle | 78 | 76.40 | 75 | 80
[63] | BC-IDC (+,−) | DCNNs | BreakHis | 80 | 78.90 | 78 | 82
[64] | BC-IDC (+,−) | CNN, SVM | Breast Cancer Wisconsin (BCW) | 76 | 75.20 | 73.80 | 78.80
[65] | BC-IDC (+,−) | ML | Kaggle | 70 | 68 | 67.50 | 72.80
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
