1. Introduction
Despite substantial technological advancements, cancer detection at an early stage remains challenging, and current cancer detection methods are time-consuming, expensive, complex, and uncomfortable [1]. Advances in organic electronic materials combined with optical imaging modalities, improved models of different optical properties, optical biosensors, and biocompatibility are promising approaches for the early detection of cancer and other diseases. The latest imaging techniques build on progress in ultrasound, positron emission tomography (PET), computed tomography (CT), and magnetic resonance imaging (MRI) for enhanced cancer diagnosis and treatment, which will help screen patients more accurately [2]. Developments in optical biosensors and organic electronics help to discriminate between healthy and cancerous cells, and the resulting point-of-care devices can be used in cancer diagnosis [3]. At present, ultrasound-based cancer detection depends on the experience of the clinician, particularly for measuring and marking tumors. Specifically, a clinician typically uses an ultrasound instrument to detect cancer by finding an angle that gives a clear view of the tumor on the screen and, after continued investigation, holding this view steady with one hand while marking and measuring the tumor on the screen with the other [4,5]. This is a difficult task, because even a slight tremor of the hand holding the probe can significantly degrade the quality of breast ultrasound imagery; consequently, computer-aided automated detection technologies are in high demand to locate regions of interest (ROIs), i.e., tumors, in breast ultrasound images [6].
Many researchers have applied computer-aided diagnosis (CAD) methods to breast cancer detection; such methods make use of Bayesian networks, artificial neural networks (ANN), k-means clustering, decision trees (DT), and fuzzy logic (FL) [7,8]. Some authors have also combined CAD techniques with diffuse optical tomography (DOT) for diagnosing breast cancer. In recent times, convolutional neural networks (CNNs) have proved effective in distinguishing between malignant and benign breast lesions [9]. In comparison with classical techniques, CNNs eliminate the explicit feature-extraction stage: images are fed directly to the network, which automatically learns discriminative features [10]. The CNN architecture is specifically adapted to exploit the 2D structure of input images.
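As a minimal illustration of the convolution operation that a CNN layer applies directly to a 2D input, consider the following pure-Python sketch. This is not the network used in this work; in a trained CNN, the kernel weights are learned rather than fixed.

```python
# Minimal 2D "valid" convolution, the core operation a CNN layer applies
# directly to an input image (illustrative sketch only; real CNNs learn
# the kernel weights during training).
def conv2d(image, kernel):
    ih, iw = len(image), len(image[0])
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for i in range(ih - kh + 1):
        row = []
        for j in range(iw - kw + 1):
            acc = 0.0
            for u in range(kh):
                for v in range(kw):
                    acc += image[i + u][j + v] * kernel[u][v]
            row.append(acc)
        out.append(row)
    return out

# A 3x3 vertical-edge kernel applied to a 4x4 patch with a dark/bright boundary.
patch = [[0, 0, 1, 1],
         [0, 0, 1, 1],
         [0, 0, 1, 1],
         [0, 0, 1, 1]]
edge = [[-1, 0, 1],
        [-1, 0, 1],
        [-1, 0, 1]]
feature_map = conv2d(patch, edge)
print(feature_map)  # high responses where the kernel straddles the edge
```

Stacking many such learned kernels, interleaved with nonlinearities and pooling, is what lets a CNN build discriminative features from raw pixels without a separate hand-crafted feature-extraction stage.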
This article introduces an Aquila Optimizer with Bayesian Neural Network for Breast Cancer Detection (AOBNN-BDNN) model on breast ultrasound images (BUI). The presented AOBNN-BDNN model follows a series of processes to detect and classify breast cancer on BUI. To accomplish this, the AOBNN-BDNN model initially employs Wiener filtering (WF)-based noise removal and U-Net segmentation as pre-processing steps. Then, the SqueezeNet model derives a collection of feature vectors from the pre-processed image. Next, the BNN method is utilized to assign suitable class labels to the input images. Finally, the AO technique is employed to fine-tune the parameters of the BNN method so that the classification performance is enhanced, which shows the novelty of our work. To validate the enriched performance of the AOBNN-BDNN algorithm, a wide experimental study is executed on a benchmark dataset.
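The sequence of stages above can be sketched as a pipeline of callables. Every function below is a deliberately simplified stand-in, not the actual WF, U-Net, SqueezeNet, BNN, or AO code: the denoiser is a 3x3 local mean, the segmenter a fixed threshold, the feature extractor two hand-picked shape statistics, and the classifier a trivial rule.

```python
# Hypothetical sketch of the AOBNN-BDNN processing pipeline. Each stage is a
# simplified stand-in, NOT the paper's actual WF / U-Net / SqueezeNet / BNN.

def wiener_denoise(img):
    # Stand-in for Wiener filtering: 3x3 local mean over interior pixels.
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            out[i][j] = sum(img[i + di][j + dj]
                            for di in (-1, 0, 1)
                            for dj in (-1, 0, 1)) / 9.0
    return out

def segment(img):
    # Stand-in for U-Net segmentation: simple intensity threshold.
    return [[1 if px > 0.5 else 0 for px in row] for row in img]

def extract_features(mask):
    # Stand-in for SqueezeNet features: lesion area and bounding-box height.
    area = sum(sum(row) for row in mask)
    rows_hit = [i for i, row in enumerate(mask) if any(row)]
    height = (rows_hit[-1] - rows_hit[0] + 1) if rows_hit else 0
    return [area, height]

def classify(features):
    # Stand-in for the AO-tuned BNN classifier: trivial rule on lesion area.
    return "malignant" if features[0] > 4 else "benign"

def pipeline(img):
    return classify(extract_features(segment(wiener_denoise(img))))
```

The point of the sketch is the data flow (denoise, segment, featurize, classify), which mirrors the order of operations described above; each stand-in would be replaced by the corresponding learned model in the actual system.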
The rest of the paper is organized as follows.
Section 2 offers a brief survey of breast cancer classification using ultrasound images.
Section 3 elaborates on the proposed model and
Section 4 validates the performance of the proposed model. Lastly,
Section 5 concludes the study.
4. Results and Discussion
The experimental validation of the AOBNN-BDNN method is performed using the breast ultrasound image dataset [
26]. It contains a total of 780 images with three class labels, as shown in
Table 1.
Figure 3 exhibits the confusion matrices created by the AOBNN-BDNN algorithm on the applied data. On the entire dataset, the AOBNN-BDNN method recognized 429 samples as benign, 210 samples as malignant, and 132 samples as normal. Similarly, on the 70% training (TR) data, it recognized 311 samples as benign, 135 samples as malignant, and 92 samples as normal, while on the 30% testing (TS) data, it recognized 118 samples as benign, 75 samples as malignant, and 40 samples as normal.
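Per-class counts such as these are read off a multi-class confusion matrix by a one-vs-rest reduction. The sketch below shows that reduction; the diagonal entries match the entire-dataset counts reported above (429, 210, 132), but the off-diagonal entries are purely illustrative, not taken from the paper.

```python
# One-vs-rest TP/FP/FN/TN from a 3-class confusion matrix (rows = actual,
# cols = predicted). Diagonal matches the entire-dataset counts above;
# off-diagonal entries are illustrative only.
labels = ["benign", "malignant", "normal"]
cm = [[429,   5,   1],   # actual benign
      [  2, 210,   0],   # actual malignant
      [  1,   0, 132]]   # actual normal

def one_vs_rest(cm, k):
    total = sum(sum(row) for row in cm)
    tp = cm[k][k]
    fn = sum(cm[k]) - tp                              # missed class-k samples
    fp = sum(cm[r][k] for r in range(len(cm))) - tp   # wrongly predicted as k
    tn = total - tp - fn - fp
    return tp, fp, fn, tn

for k, name in enumerate(labels):
    tp, fp, fn, tn = one_vs_rest(cm, k)
    acc = (tp + tn) / (tp + tn + fp + fn)
    print(f"{name}: TP={tp} FP={fp} FN={fn} TN={tn} acc={acc:.4f}")
```

All the per-class metrics reported in the following tables (accuracy, precision, recall, F-score, MCC) are computed from these four one-vs-rest counts.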
Table 2 and
Figure 4 portray a brief classification outcome of the AOBNN-BDNN method on the entire dataset. The results imply that the AOBNN-BDNN method has effectually recognized all three classes. For instance, the AOBNN-BDNN model identified benign class samples with an accuracy of 98.85%, precision of 99.77%, recall of 98.17%, F-score of 98.96%, and MCC of 97.68%. Moreover, the AOBNN-BDNN algorithm identified malignant class samples with an accuracy of 98.97%, precision of 96.33%, recall of 100%, F-score of 98.13%, and MCC of 97.46%. Furthermore, the AOBNN-BDNN technique identified normal class samples with an accuracy of 99.87%, precision of 100%, recall of 99.25%, F-score of 99.62%, and MCC of 99.55%.
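Of the metrics above, the MCC is the least standard; it can be reproduced from the one-vs-rest counts with the usual binary formula. The counts in the example call below are illustrative, not the paper's actual tallies.

```python
import math

# Binary Matthews correlation coefficient from one-vs-rest counts
# (the example counts are illustrative, not the paper's tallies).
def mcc(tp, fp, fn, tn):
    num = tp * tn - fp * fn
    den = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return num / den if den else 0.0  # convention: 0 when any margin is empty

print(round(mcc(90, 5, 5, 100), 4))  # -> 0.8997
```

Unlike accuracy, MCC stays informative under class imbalance (benign samples dominate this dataset), which is why it is reported alongside the other metrics.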
Table 3 and
Figure 5 display a detailed classification outcome of the AOBNN-BDNN algorithm on 70% of the TR data. The results denote that the AOBNN-BDNN methodology has effectually recognized all three classes. For example, the AOBNN-BDNN technique identified benign class samples with an accuracy of 98.53%, precision of 99.68%, recall of 97.80%, F-score of 98.73%, and MCC of 97.02%. Additionally, the AOBNN-BDNN approach identified malignant class samples with an accuracy of 98.72%, precision of 95.07%, recall of 100%, F-score of 97.47%, and MCC of 96.67%. Besides, the AOBNN-BDNN technique identified normal class samples with an accuracy of 99.82%, precision of 100%, recall of 98.92%, F-score of 99.46%, and MCC of 99.35%.
Table 4 and
Figure 6 exhibit brief classification results of the AOBNN-BDNN technique on 30% of the TS data. The outcomes imply that the AOBNN-BDNN approach has effectually recognized all three classes. For example, the AOBNN-BDNN methodology identified benign class samples with an accuracy of 99.57%, precision of 100%, recall of 99.16%, F-score of 99.58%, and MCC of 99.15%. Along with that, the AOBNN-BDNN algorithm identified malignant class samples with an accuracy of 99.57%, precision of 98.68%, recall of 100%, F-score of 99.34%, and MCC of 99.03%. In addition, the AOBNN-BDNN methodology identified normal class samples with an accuracy, precision, recall, F-score, and MCC of 100% each.
The training accuracy (TA) and validation accuracy (VA) attained by the AOBNN-BDNN methodology on the test dataset are shown in
Figure 7. The experimental results denote that the AOBNN-BDNN algorithm has reached higher values of TA and VA. In particular, the VA is greater than the TA.
The training loss (TL) and validation loss (VL) attained by the AOBNN-BDNN approach on the test dataset are displayed in
Figure 8. The experimental outcome shows that the AOBNN-BDNN method has exhibited minimal values of TL and VL. To be specific, the VL is lower than the TL.
A clear precision-recall analysis of the AOBNN-BDNN technique on the test dataset is represented in
Figure 9. The figure shows that the AOBNN-BDNN technique has attained enhanced precision-recall values for every class label.
A brief ROC study of the AOBNN-BDNN method on the test dataset is shown in
Figure 10. The outcomes signify that the AOBNN-BDNN methodology is capable of classifying distinct classes on the test dataset.
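The area under such an ROC curve can be computed without plotting at all, as the probability that a randomly chosen positive sample scores above a randomly chosen negative one (the Mann-Whitney statistic). The scores in the sketch below are illustrative, not the model's actual outputs.

```python
# ROC AUC as the Mann-Whitney statistic: the fraction of (positive, negative)
# score pairs ranked correctly, counting ties as half. Scores are illustrative,
# not the AOBNN-BDNN model's actual outputs.
def roc_auc(pos_scores, neg_scores):
    wins = 0.0
    for p in pos_scores:
        for n in neg_scores:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos_scores) * len(neg_scores))

pos = [0.9, 0.8, 0.75, 0.6]   # scores for samples of the target class
neg = [0.7, 0.4, 0.3]         # scores for all other samples
print(roc_auc(pos, neg))      # 11 of 12 pairs ranked correctly
```

For the three-class setting here, one such one-vs-rest AUC is obtained per class label, which is what a multi-class ROC figure summarizes.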
Table 5 provides an overall comparative inspection of the AOBNN-BDNN method with recent approaches [
11].
Figure 11 renders a brief comparison of the AOBNN-BDNN method with existing models in terms of accuracy and precision. The experimental outcomes report the superiority of the AOBNN-BDNN model. With respect to accuracy, the AOBNN-BDNN model attained an increased accuracy of 99.56%, whereas the SEODTL-BDC, ESD, LSVM, ESKNN, FKNN, and LD methods obtained reduced accuracies of 99.18%, 98.89%, 98.41%, 97.55%, 96.93%, and 97.41%, respectively. In addition, with regard to precision, the AOBNN-BDNN method achieved an increased precision of 99.72%, whereas the SEODTL-BDC, ESD, LSVM, ESKNN, FKNN, and LD algorithms gained reduced precisions of 98.18%, 98.11%, 97.87%, 98.27%, 97%, and 98.05%, correspondingly.
Figure 12 offers a comparative examination of the AOBNN-BDNN method with existing models in terms of recall and F-score. The experimental outcomes report the superiority of the AOBNN-BDNN method. With respect to recall, the AOBNN-BDNN algorithm gained an increased recall of 99.72%, whereas the SEODTL-BDC, ESD, LSVM, ESKNN, FKNN, and LD methodologies reached reduced recalls of 99.12%, 99.07%, 98.73%, 97.70%, 97.42%, and 97.92%, correspondingly. Additionally, concerning F-score, the AOBNN-BDNN method received an increased F-score of 99.64%, whereas the SEODTL-BDC, ESD, LSVM, ESKNN, FKNN, and LD methodologies gained reduced F-scores of 98.14%, 98.88%, 98.29%, 98.61%, 96.93%, and 96.99%, correspondingly.
Finally, a classification time (CT) inspection of the AOBNN-BDNN model with recent models is carried out in
Table 6 and
Figure 13. The attained values imply that the ESKNN method has shown the poorest result with a higher CT of 2.28 min. Meanwhile, the ESD and FKNN methods obtained slightly better outcomes with close CTs of 2.28 min and 2.24 min, respectively.
In addition, the LSVM and LD models accomplished moderately reduced CTs of 1.95 min and 1.75 min, respectively. However, the AOBNN-BDNN model gained the maximum outcome with a minimal CT of 1.01 min. Therefore, the experimental values confirm that the AOBNN-BDNN method is an effective tool compared to other approaches.
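Classification-time comparisons like the one above are typically produced with a simple wall-clock harness that times each model on the same input and keeps the best of several runs to reduce noise. The sketch below is illustrative; the workload is a dummy stand-in rather than any of the compared classifiers.

```python
import time

# Simple wall-clock timing harness for comparing classification time
# (illustrative; dummy_classifier is a stand-in workload, not a real model).
def time_call(fn, *args, repeats=5):
    best = float("inf")
    for _ in range(repeats):
        t0 = time.perf_counter()
        fn(*args)
        best = min(best, time.perf_counter() - t0)  # best-of-N reduces noise
    return best

def dummy_classifier(n):
    return sum(i * i for i in range(n))

elapsed = time_call(dummy_classifier, 10_000)
print(f"best of 5 runs: {elapsed:.6f} s")
```

Taking the minimum over repeats (rather than the mean) filters out scheduler and cache interference, giving a more stable per-model figure for tables such as Table 6.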