Proceeding Paper

Histopathological Image Analysis Using Deep Learning Framework †

by
Sudha Rani Vupulluri
* and
Jogendra Kumar Munagala
Department of CSE, Koneru Lakshmaiah Education Foundation, Guntur 522302, India
*
Author to whom correspondence should be addressed.
Presented at the International Conference on Recent Advances on Science and Engineering, Dubai, United Arab Emirates, 4–5 October 2023.
Eng. Proc. 2023, 59(1), 132; https://doi.org/10.3390/engproc2023059132
Published: 29 December 2023
(This article belongs to the Proceedings of Eng. Proc., 2023, RAiSE-2023)

Abstract

Breast cancer is among the cancers with the highest mortality rates, so histopathological image evaluation must detect it early. Traditional methods are time-consuming and limited by pathologists’ skill. Existing histopathological image analysis (HIA) systems neglect breast cancer histopathology image segmentation because of its complexity and the scarcity of historical data with exact annotations. In this work, breast cancer histopathology images are segmented with a graph-based method, relevant features are extracted from the segmented images, and the images are classified using recursive feature elimination. Correctly classifying breast histopathology scans as abnormal or normal enables the early detection of breast cancer symptoms. Modern medicine diagnoses and predicts diseases, including cancer, using histopathological image analysis, and deep learning can automate and improve this analysis through image recognition and feature extraction. This study extensively analyses deep learning frameworks in histopathology image analysis. Starting with the challenges of histopathological image interpretation, it emphasizes the intricate patterns, cell structures, and tissue anomalies that demand expert attention. It then examines the design of CNNs, RNNs, and their variants and their ability to capture subtle features and patterns in histopathological images. We examine tumour detection, grading, segmentation, and prognosis using deep learning in histopathology; for each problem, this article evaluates state-of-the-art deep learning models and approaches to demonstrate their accuracy and efficiency. The study also addresses data collection, preprocessing, and annotation when training deep learning models for histopathological image analysis, and analyses the ethical and regulatory ramifications of automated clinical systems. Case studies of deep learning-based histopathological image processing illustrate applications in patient care. Multi-modal data fusion, transfer learning, and explainable AI may further increase the accuracy and interpretability of histopathological image analyses.

1. Introduction

Computer-aided diagnosis (CAD) has become a medical research tool of paramount importance. The evaluation and analysis of medical images, such as ultrasonic or X-ray images, has been used to diagnose different types of cancer with CAD. Recently, researchers have focused on developing CAD systems for the early detection of endometrial cancer using hysteroscopy [1], ultrasound [2], MRI [3], and histology images [4]. For instance, a support vector machine (SVM) classifier developed by Neofytou et al. showed an accuracy of 81% on a dataset with 516 regions of interest, and a CART approach created by Pouliakis et al. classifies normal and abnormal cases among 222 histologically confirmed cases.
Recent developments in deep learning [5] and artificial intelligence have demonstrated remarkable outcomes for a variety of applications, particularly in the fields of speech and medicine. Notably, they have made great progress in the accurate detection of malignancies and uncommon disorders, such as viral pneumonia, diabetic retinopathy [6], congenital cataracts, and skin cancers [7], and have displayed human expert-level accuracy in illness categorization. The combination of deep learning and CAD offers significant potential for the early identification of cancer, as it can leverage big data in clinical imaging and further improve the efficiency of popular CAD systems [8]. Breast cancer is a well-known and serious disease that affects both humans and animals; it is the most common kind of cancer in females around the world [9]. Mammary tumours are the most commonly identified tumours [10] in female dogs, and they are also among the most malignant [11]. The similarity between canine mammary tumours (CMTs) and human breast cancer (HBC) makes CMTs useful models for studying HBC clinicopathological features, histology, and prognostic markers [12]. However, CMTs have a greater mortality rate than HBC, mainly due to delayed diagnoses and a lack of early diagnostic procedures [13]. Often, dog owners become aware of tumours only when they are visually noticeable, which leads to a poorer prognosis [14]. Early diagnosis is crucial for successful treatment. Recent studies have explored various techniques, such as multiplexed bead assays [15], biological sensors [16], and gene expression profiling [17], for detecting CMTs. AlexNet, a popular convolutional neural network (ConvNet), has demonstrated promising results when trained to classify breast cancer histology images [18]. Deep learning-based techniques have outperformed groups of histopathologists in categorizing breast cancer images. Transfer learning has been utilized to categorize histopathology images, where a VGG19-based feature extractor demonstrated positive results [19]. An analogous method was used in earlier research to divide HBC and CMT into two groups, achieving the highest accuracy using DL-based methods [20].

Histopathology Image Analysis for Breast Cancer

Graph-based segmentation is efficient in histopathology image processing for breast cancer because it addresses several domain-specific problems. Complex and irregular structures in histopathology images make it difficult to distinguish regions of interest, and pixel-level interactions in graph-based segmentation reveal important tissue structures, capturing minor texture and morphological differences. The method works, as the sketch below illustrates, because it preserves spatial data (by considering the pixel layout, graph-based segmentation retains spatial information that conventional approaches may miss); adapts to heterogeneity (histopathological images have diverse cell densities and tissue architectures, and graph-based segmentation dynamically identifies regions of interest to accommodate this); detects boundaries (graph-based segmentation excels at recognizing tissue boundaries, which is crucial for separating malignant from healthy tissue); and extracts contextual characteristics that show the relationships between neighbouring pixels, giving a more complete tissue picture.
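A minimal sketch of graph-based segmentation on a patch is shown below. The paper does not name its exact graph construction, so Felzenszwalb’s graph-based algorithm from scikit-image stands in here; the file name and parameter values are illustrative assumptions.

```python
# Minimal sketch: graph-based segmentation of a histopathology patch.
# Felzenszwalb's graph-based algorithm stands in for the paper's
# (unspecified) graph construction; file name and parameters are illustrative.
from skimage import io
from skimage.segmentation import felzenszwalb, mark_boundaries

patch = io.imread("patch.png")  # hypothetical H&E-stained patch

# Each pixel is a graph node; edges weight colour similarity between
# neighbours, and regions merge when their internal variation allows it.
segments = felzenszwalb(patch, scale=100, sigma=0.8, min_size=50)

overlay = mark_boundaries(patch, segments)  # visualize region boundaries
print(f"{segments.max() + 1} regions found")
```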

2. Proposed Method

Histopathological image analysis using deep learning frameworks has gained significant attention in recent years due to its potential to improve cancer diagnosis and accelerate cancer research. Histopathology involves the microscopic examination of tissue samples to detect abnormalities, such as cancerous cells or other diseases. Deep learning frameworks offer a powerful toolset for automating and enhancing this process. Here is an overview of how deep learning is applied to histopathological image analysis. Data Collection and Preprocessing: A large dataset of histopathological images is gathered, which may include various tissue types, staining techniques, and conditions. The images are then preprocessed by resizing, normalizing, and augmenting them to ensure consistent input to the deep learning model.
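A minimal preprocessing sketch follows, assuming PyTorch/torchvision; the folder layout, target size, and normalization statistics are illustrative assumptions rather than the authors’ exact pipeline.

```python
# Preprocessing sketch: resize and normalize patches so every input to
# the network has the same shape and intensity statistics.
# Assumes torchvision is installed; the folder layout is hypothetical.
from torchvision import datasets, transforms

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),           # common CNN input size
    transforms.ToTensor(),                   # HWC uint8 -> CHW float in [0, 1]
    transforms.Normalize(mean=[0.485, 0.456, 0.406],  # ImageNet statistics,
                         std=[0.229, 0.224, 0.225]),  # reused for stain images
])

# Expects a class-per-subfolder layout, e.g. data/benign/, data/malignant/
dataset = datasets.ImageFolder("data", transform=preprocess)
```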
Deep Learning Models: An appropriate deep learning architecture is chosen for the task. In image analysis, convolutional neural networks (CNNs) are frequently employed. Existing pre-trained models, like VGG, ResNet, or Inception, are fine-tuned or adapted to histopathology-specific tasks; transfer learning can save time and resources.
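A hedged sketch of the fine-tuning step, assuming torchvision; ResNet-18 stands in for the paper’s ResNet-16, which torchvision does not ship.

```python
# Transfer-learning sketch: adapt an ImageNet-pretrained ResNet to the
# two-class (benign/malignant) task. ResNet-18 is an illustrative stand-in
# for the paper's ResNet-16 variant.
import torch.nn as nn
from torchvision import models

model = models.resnet18(weights="IMAGENET1K_V1")

# Freeze the pretrained backbone so only the new head is trained at first.
for param in model.parameters():
    param.requires_grad = False

# Replace the 1000-way ImageNet classifier with a 2-way head.
model.fc = nn.Linear(model.fc.in_features, 2)
```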
Training: The data are separated into training, validation, and test sets so that the models can be trained and tested. A loss function and optimization algorithm suitable for the task are defined. The model is trained on a powerful GPU-enabled machine or cloud platform. Training may take hours or days, depending on the dataset size and model complexity.
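A sketch of this training setup under the same assumptions as the sketches above (`dataset` and `model` are reused from them; split fractions, batch size, epochs, and learning rate are illustrative).

```python
# Training sketch: split the data, define loss and optimizer, run a
# standard loop. `dataset` and `model` come from the sketches above.
import torch
from torch.utils.data import DataLoader, random_split

n = len(dataset)
n_train, n_val = int(0.7 * n), int(0.15 * n)
train_set, val_set, test_set = random_split(
    dataset, [n_train, n_val, n - n_train - n_val])

train_loader = DataLoader(train_set, batch_size=32, shuffle=True)
device = "cuda" if torch.cuda.is_available() else "cpu"
model = model.to(device)

criterion = torch.nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-4)  # head only

for epoch in range(10):
    for images, labels in train_loader:
        images, labels = images.to(device), labels.to(device)
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```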
Data Augmentation: The model’s robustness is improved by increasing the variety of training examples through data augmentation methods like rotation, flipping, zooming, and colour jittering.
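A sketch of these augmentations with torchvision transforms (parameter values are illustrative).

```python
# Augmentation sketch: the transforms named in the text, applied only to
# the training split. All parameter values are illustrative.
from torchvision import transforms

augment = transforms.Compose([
    transforms.RandomRotation(degrees=15),                 # rotation
    transforms.RandomHorizontalFlip(),                     # flipping
    transforms.RandomResizedCrop(224, scale=(0.8, 1.0)),   # zooming
    transforms.ColorJitter(brightness=0.2, contrast=0.2,   # colour jittering
                           saturation=0.2),
    transforms.ToTensor(),
])
```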

2.1. Accuracy of Breast Cancer Image Classification

Graph-based segmentation that identifies relevant characteristics improves the accuracy of breast cancer image classification. Predefined feature extraction algorithms might not capture a tissue’s complex spatial interactions; graph-based segmentation, however, uses pixel-level interactions to determine regions of interest and to extract contextually relevant data. This allows finer-grained breast cancer image processing, enhancing the discrimination between malignant and benign tissue and improving diagnostic and prognostic models in breast cancer pathology.
A proposed approach for breast cancer classification must be validated using quantitative metrics such as sensitivity, specificity, and accuracy. These measures evaluate the model’s ability to correctly separate cancerous from benign cases. Here is how they help establish credibility:
True positive rate (sensitivity): Sensitivity measures the proportion of actual positive cases that the model correctly identifies. True positives are extremely important when diagnosing cancer, and a high sensitivity means the model catches them, resulting in fewer missed cases of breast cancer. Sensitivity is calculated as Sensitivity = TP/(TP + FN).
True negative rate (specificity): Specificity measures the proportion of actual negative cases that the model correctly identifies, i.e., how well it avoids false positives. A high specificity means the model can distinguish malignant from noncancerous specimens without triggering false alarms. Specificity is calculated as Specificity = TN/(TN + FP).
Accuracy: Taking both positive and negative outcomes into account, accuracy measures how well the model classifies data overall, i.e., how well it distinguishes between benign and malignant cases. Accuracy is calculated as the ratio of correct diagnoses to total cases: Accuracy = (TP + TN)/(TP + TN + FP + FN).
The model’s efficacy in breast cancer classification is measured across these quantitative metrics: a model with high sensitivity for detecting true cases of breast cancer reduces the risk of potentially fatal false negatives, while high specificity means patients with benign conditions experience less stress and undergo fewer follow-up procedures.

2.2. Inter-Column Arcs

A strict shape constraint is enforced between two adjacent columns, p(x1, y1) and q(x2, y2). To preserve the shape constraint, a directed arc with weight +∞ is added from node n_i(x1, y1, z) of column p to node n_i(x2, y2, max(z − Δ^i_pq, L^i_pq)) of column q, so that no surface S_i can pass through an invalid position (x1, y1, z); adding an extra node to the surface costs +∞, as does the directed arc between the two nodes. The same construction prevents an invalid surface position S_i(x2, y2, z): another directed arc with weight +∞ runs from the node at (x2, y2, z) to node n_i(x1, y1, max(z + Δ^i_pq, L^i_pq)). If z + Δ^i_pq > Z − 1, this is handled as the case shown in Figure 1.
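The arc construction can be sketched as follows, assuming the standard optimal-surface graph formulation; networkx, the column labels, the constraint value, and taking the lower bound L as 0 are all illustrative simplifications.

```python
# Sketch of inter-column arc construction for optimal-surface graph
# segmentation. Z (depth levels) and `delta` (smoothness bound) are
# illustrative; the lower bound L is taken as 0 for simplicity.
import math
import networkx as nx

Z = 10       # nodes per column
delta = 2    # max allowed surface-height change between neighbours

G = nx.DiGraph()

def add_inter_column_arcs(G, p, q):
    """Add infinite-weight arcs enforcing |height(p) - height(q)| <= delta."""
    for z in range(Z):
        # A surface through (p, z) forces the surface in column q to stay
        # above z - delta; the infinite weight makes violations uncuttable.
        G.add_edge((p, z), (q, max(0, z - delta)), weight=math.inf)
        G.add_edge((q, z), (p, max(0, z - delta)), weight=math.inf)

add_inter_column_arcs(G, ("x1", "y1"), ("x2", "y2"))
print(G.number_of_edges(), "inter-column arcs added")
```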
Each of the proposed method’s three CNNs (VGG-16, AlexNet, and ResNet-16) produces 1000 features. These 1000 features are used in the classification stage; a greedy technique extracts the best features from the 1000 available in each network. Several classifiers are then used to predict the disease class. Figure 2 and Figure 3 show the proposed model.
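A sketch of the three-network feature extraction and concatenation, assuming torchvision; ResNet-18 again stands in for ResNet-16, and the 1000-dimensional vectors are taken to be the networks’ ImageNet logit outputs.

```python
# Sketch: extract 1000-dimensional features from the three backbones and
# concatenate them. ResNet-18 is an illustrative stand-in for ResNet-16.
import torch
from torchvision import models

backbones = [
    models.vgg16(weights="IMAGENET1K_V1").eval(),
    models.alexnet(weights="IMAGENET1K_V1").eval(),
    models.resnet18(weights="IMAGENET1K_V1").eval(),
]

def extract_features(x):
    """x: (N, 3, 224, 224) batch -> (N, 3000) concatenated feature vector."""
    with torch.no_grad():
        return torch.cat([net(x) for net in backbones], dim=1)

batch = torch.randn(4, 3, 224, 224)   # stand-in for preprocessed patches
features = extract_features(batch)
print(features.shape)                 # torch.Size([4, 3000])
```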

2.3. Segmenting Breast Cancer Histopathology Images

Tissue heterogeneity, uneven boundaries, and the requirement to precisely delineate malignant regions make segmenting breast cancer histopathology images difficult. Our method addresses these issues as follows:
Tissue heterogeneity: A uniform segmentation model is challenging to construct since breast tissue is heterogeneous. We use graph-based segmentation to accurately identify tissue types by capturing pixel-level associations and adapting to tissue variations.
Uneven boundaries: The challenge is that uneven and complicated borders between cancerous and noncancerous tissue are common in histopathology images. The solution is boundary detection: graph-based methods clearly define these boundaries for reliable segmentation.
Accurate cancer region delineation: The challenge is that prognosis and treatment depend on tumour size and extent; therefore, breast cancer diagnosis requires exact delineation. The solution is that graph-based segmentation captures small differences in tissue texture and accurately localizes malignant areas for accurate diagnosis.

2.4. Clarification of These Differences

Graph-based segmentation: The graph segmentation and normalized cuts algorithms are designed for image segmentation. The image is a graph with pixels or areas as nodes and their relationships as edges. These methods optimize an energy function to divide the image into meaningful areas or segments by balancing similarities between pixels or regions and dissimilarities between segments. As a preprocessing step for computer vision tasks like breast cancer image analysis, graph-based segmentation segments an image into areas.
Conventional breast cancer image classification: Breast cancer image categorization using traditional machine learning and custom feature extraction is common. These approaches use Histogram of Oriented Gradients (HOG), Haralick texture features, or Gabor filters to extract image properties like texture, shape, and colour, and the extracted characteristics are used to train machine learning models (e.g., support vector machines and random forests) for breast cancer image categorization, as sketched below.
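A sketch of this conventional pipeline (HOG features feeding an SVM), assuming scikit-image and scikit-learn; the image data and HOG parameters are illustrative.

```python
# Sketch of the conventional pipeline: hand-crafted HOG features feed a
# classical SVM. HOG parameters are illustrative.
import numpy as np
from skimage.color import rgb2gray
from skimage.feature import hog
from sklearn.svm import SVC

def hog_features(images):
    """images: list of same-size RGB arrays -> (N, D) HOG feature matrix."""
    return np.array([
        hog(rgb2gray(img), orientations=9,
            pixels_per_cell=(8, 8), cells_per_block=(2, 2))
        for img in images
    ])

# X_img, y: hypothetical training images and 0/1 labels
# clf = SVC(kernel="rbf").fit(hog_features(X_img), y)
```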
Convolution generates the feature map, using filter sizes such as 3 × 3, 5 × 5, or 7 × 7 pixels. Here, l stands for the layer, and the filter J has dimension i × j. The layer’s response is given in Equation (2):

O_i^l = f(b_i^l + Σ_{j=1}^{x_i^{l−1}} J_{i,j}^l O_j^{l−1})    (2)

where b_i^l denotes a bias matrix and J_{i,j}^l denotes the filter connecting feature map j of layer l − 1 to feature map i of layer l. The input image has a resolution of 222 × 222 pixels, with 3 × 3 and 5 × 5 pixel channels. The ResNet-16 components are depicted in Figure 4.
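Equation (2) can be illustrated directly in code; the sketch below computes one output feature map, with ReLU assumed for the activation f and all shapes illustrative.

```python
# Worked sketch of Equation (2): one convolutional output map as a biased,
# nonlinearly activated sum over the previous layer's maps. f = ReLU here.
import numpy as np
from scipy.signal import correlate2d

def conv_response(prev_maps, filters, bias):
    """prev_maps: (C, H, W); filters: (C, k, k); bias: scalar -> (H', W')."""
    total = sum(
        correlate2d(prev_maps[j], filters[j], mode="valid")
        for j in range(prev_maps.shape[0])
    ) + bias
    return np.maximum(total, 0.0)  # activation f taken as ReLU

prev = np.random.rand(3, 8, 8)      # three 8x8 input feature maps
filt = np.random.rand(3, 3, 3)      # one 3x3 filter per input map
print(conv_response(prev, filt, bias=0.1).shape)  # (6, 6)
```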

2.5. The Potential Clinical Implications

Clinically, accurate breast cancer histopathology picture classification can enhance early identification and patient outcomes. Advanced machine learning algorithms can improve cancer detection, reduce false negatives, and help personalize treatment. It improves prognoses by allowing exact tumour grading, risk stratification, and treatment monitoring. This technology optimizes resources, conducts research, and gives patients visual representations of their conditions. AI’s full benefits must be integrated into clinical workflows with human skills to ensure safety and privacy.

2.6. Histopathology Image Analysis Tasks

The graph-based segmentation and classification method used in breast cancer histopathology image analysis has several applications, extending to histopathology image analysis tasks across cancer types and medical situations. By modifying this strategy, we can carry out the following: the same method can be used to analyse histopathology images from other organs, such as the lung, prostate, or colon, for early cancer diagnosis and precise categorization; disease subtyping can use the method to subtype tumours for molecularly targeted treatment; treatment response assessment can monitor histopathological changes over time to optimize treatment efficacy in various cancer types; and the approach can be extended to detect infectious agents in tissue samples, improving tuberculosis and viral infection diagnosis and treatment.
The architecture of ResNet-16 is depicted in Figure 5. Recursive feature elimination (RFE) is utilized to determine the attribute subset that performs best; this is a greedy (“ravenous-hungry”) optimization strategy. In the realm of breast cancer classification, a comparison of methodologies reveals a dichotomy: traditional approaches rely on manually crafted features and conventional machine learning algorithms, while deep learning discerns crucial traits directly from histopathological images. The contrast lies in feature extraction and representation: traditional methods necessitate feature engineering, which demands domain expertise, consumes time, and can overlook vital image information, whereas data-driven deep learning algorithms, such as convolutional neural networks (CNNs), autonomously learn hierarchical features and intricate histopathological patterns directly from raw pixel data.

2.7. Recursive Feature Elimination (RFE)

In machine learning, RFE is a feature selection technique for improving classification models. It rates feature relevance by recursively deleting the least important features in a dataset. RFE and its impact on classification accuracy can be summarized as follows (a code sketch follows this list):
Feature ranking: RFE first trains a classification model (such as an SVM or a random forest) on all dataset attributes. The trained model’s importance scores rank the features; features that do not improve model performance are ranked as less relevant.
Feature elimination: the lowest-ranked features are removed from the dataset, and the model is retrained and evaluated on the reduced feature set.
Recursion: a fixed number of the least important features is iteratively deleted, repeating the two steps above, until a predefined number or optimal subset of features is reached.
Accuracy improvements: RFE improves classification accuracy in several ways. It removes noisy or superfluous features that degrade model performance; it simplifies the model by focusing on the most informative features, reducing overfitting and improving generalization; and it helps feature engineers and model interpreters identify classification-relevant features.
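A sketch of RFE with scikit-learn; the feature counts, step size, and the linear-SVM ranking estimator are illustrative choices.

```python
# Sketch of RFE: a linear SVM ranks features by the magnitude of its
# coefficients, and the lowest-ranked are dropped each round.
import numpy as np
from sklearn.feature_selection import RFE
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 1000))      # stand-in for 1000 CNN features
y = rng.integers(0, 2, size=200)      # stand-in benign/malignant labels

selector = RFE(
    estimator=SVC(kernel="linear"),   # exposes coef_ for feature ranking
    n_features_to_select=100,         # keep the 100 best features
    step=50,                          # drop 50 features per round
)
X_reduced = selector.fit_transform(X, y)
print(X_reduced.shape)                # (200, 100)
```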

2.8. How It Handles Variations and Uncertainties in Image Data

Recursive feature elimination (RFE) classification is impressively resilient to variations and uncertainties in image data. RFE excels at lowering noise: image data often contain noise and irrelevant information due to illumination, image artefacts, or acquisition methods, and during feature selection RFE discards noisy or uninformative characteristics so the model focuses on the most important data; this noise reduction improves the classifier’s resistance to image defects. RFE also provides feature stability: quality, resolution, and viewing angle can vary across image data, making classification uncertain, but by consistently selecting and preserving the most discriminative features across image instances or dataset variations, RFE’s iterative selection provides stability, and this robustness lets the model operate well in many settings. RFE also counters overfitting: with high variability or inadequate data, classification models may overfit and learn noise rather than patterns; by decreasing the dimensionality of the feature space and progressively removing less significant elements, RFE simplifies and strengthens the model, making it less susceptible to data fluctuations. Finally, RFE-selected features are frequently more useful and generalizable across datasets or data-gathering conditions; this transferability improves the model’s adaptability to new data, making it a good choice for real-world applications with variable image data.

2.9. Computer-Aided Diagnosis (CAD) Systems

Graph-based segmentation and classification improves medical imaging CAD systems in several respects. Improved accuracy: the graph-based segmentation method precisely defines regions of interest in medical images, reducing CAD false positives and false negatives; segmenting and classifying tissue features improves CAD diagnostic accuracy, making medical judgements more reliable. Enhanced interpretability: graph-based segmentation yields interpretable boundary delineation and feature extraction, and this transparency helps clinicians understand and trust CAD conclusions, improving collaboration. Adaptability to data: the method works effectively with medical imaging data from different practices; because it can manage variances in tissue texture, structure, and image quality, it is versatile across medical imaging settings.

3. Results and Discussions

Basically, there are two broad categories of breast cancer: benign and aggressive. The dataset contains 277,524 patches of 50 × 50 pixels (78,786 IDC-positive and 198,738 IDC-negative). Each patch is named u_xX_yY_classC.png, for example, 10253_idx5_x1351_y1101_class0.png, where u is the patient ID (10253_idx5), x and y are the patch coordinates, and C is the class (1 indicates IDC, 0 non-IDC). An example histopathological image of a breast is shown in Figure 6.
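Parsing these filenames back into labels can be sketched as follows; the regular expression is an assumption based on the naming scheme just described.

```python
# Sketch: recover patient ID, coordinates, and label from a patch filename
# of the form u_xX_yY_classC.png (the regex is an assumption based on the
# naming scheme described in the text).
import re

PATCH_RE = re.compile(
    r"^(?P<patient>.+)_x(?P<x>\d+)_y(?P<y>\d+)_class(?P<c>[01])\.png$")

def parse_patch(name):
    m = PATCH_RE.match(name)
    if m is None:
        raise ValueError(f"unexpected filename: {name}")
    return m["patient"], int(m["x"]), int(m["y"]), int(m["c"])

print(parse_patch("10253_idx5_x1351_y1101_class0.png"))
# ('10253_idx5', 1351, 1101, 0)
```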
We used 7026 images from the dataset to train the framework and another 2342 for evaluation. The model’s performance parameters are evaluated using the formulae below. The LCC (left craniocaudal), LMLO (left mediolateral oblique), RCC (right craniocaudal), and RMLO (right mediolateral oblique) views are shown in Figure 6.

Evaluation of Performance Parameters

The following equations are used for an evaluation of the performance parameters:
Accuracy = (TP + TN)/(TP + TN + FP + FN)
Se = TP/(TP + FN)
Sp = TN/(TN + FP)
Pr = TP/(TP + FP)
F-score = 2TP/(2TP + FP + FN)
Here, sensitivity (Se) is indicated as St in the tables, specificity (Sp) as Spt, precision (Pr) as Pre, accuracy as Ac, and F-score as Fs. The terms “true positive” (TP) and “true negative” (TN) refer to correctly identified samples, while “false positive” (FP) and “false negative” (FN) refer to misclassified samples, such as images of cancer incorrectly identified as “normal”.
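The five metrics can be computed directly from the confusion counts; the sketch below uses made-up counts purely for illustration.

```python
# Worked sketch of the five metrics above from raw confusion counts
# (the counts themselves are made-up illustration values).
def metrics(tp, tn, fp, fn):
    return {
        "accuracy":    (tp + tn) / (tp + tn + fp + fn),
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "precision":   tp / (tp + fp),
        "f_score":     2 * tp / (2 * tp + fp + fn),
    }

print(metrics(tp=850, tn=1300, fp=90, fn=102))
```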
Table 1 presents the metrics for assessing the effectiveness of the suggested approach on the standard breast cancer dataset.
Metrics for gauging how well the planned strategy works with RFE on the basic breast cancer dataset are shown in Table 2. The SVM classifier performs better among the individual models; however, the decision tree performs well within the proposed framework.

4. Conclusions

The efficiency of deep learning is greatly influenced by how well the model fits the data and how good the feature extraction is. In this research, we built a deep learning model to identify key features from breast cancer histopathology images. Overall, this study reveals the advantages of the proposed hybrid model, which combines the deep learning framework with a decision tree classifier, for properly identifying cancer subtypes in histopathological images. Because of its simplicity and efficiency, this approach is suited to low-cost healthcare settings. Further progress in this area could lead to more effective cancer diagnosis and treatment in the near future.

Author Contributions

Conceptualization by J.K.M., methodology and validation by S.R.V. and J.K.M., writing as well as review by S.R.V. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data can be obtained from the corresponding author on request.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Hussain, S.; Saxena, S.; Shrivastava, S.; Arora, R.; Singh, R.J.; Jena, S.C.; Kumar, N.; Sharma, A.K.; Sahoo, M.; Tiwari, A.K.; et al. Multiplexed autoantibody signature for serological detection of canine mammary tumours. Sci. Rep. 2018, 8, 721–734. [Google Scholar] [CrossRef] [PubMed]
  2. Jena, S.C.; Shrivastava, S.; Saxena, S.; Kumar, N.; Maiti, S.K.; Mishra, B.P.; Singh, R.K. Surface plasmon resonance immunosensor for label-free detection of Birc5 biomarker in spontaneously occurring canine mammary tumours. Sci. Rep. 2019, 9, 112–123. [Google Scholar] [CrossRef] [PubMed]
  3. Rao, M.V.; Ramya, U.; Lakshman, P.; Prabhakar, V.S.V.; Madhav, B.T.P. Triple notch slotted monopole antenna with complementary split ring resonators. Int. J. Comput. Aided Eng. Technol. 2021, 15, 458. [Google Scholar] [CrossRef]
  4. Kesana, S.; Srinivasa Babu, P.S.; Shameem, S. Design of quad-band antenna of 3.8 GHz range for Wi-Max Applications. In Proceedings of the 2021 IEEE International Conference on Mobile Networks and Wireless Communications (ICMNWC), Tumkur, Karnataka, India, 3–4 December 2021. [Google Scholar]
  5. Singh, G.; Singh, U. Triple-step feed line-based compact ultra-wideband antenna with quadruple band-notch characteristics. Int. J. Electron. 2022, 109, 271–292. [Google Scholar] [CrossRef]
  6. Rizvi Jarchavi, S.M.; Iqbal, M.; Dalarsson, M.; Alibakhshikenari, M.; Dayoub, I. Compact multi-band flexible antenna for ISM, WLAN, Wi-Fi, and 5G sub-6-ghz applications. In Proceedings of the 2022 3rd URSI Atlantic and Asia Pacific Radio Science Meeting (AT-AP-RASC), Gran Canaria, Spain, 29 May–3 June 2022. [Google Scholar]
  7. Saikumar, K.; Arulanantham, D.; Rajalakshmi, R.; Prabu, R.T.; Kumar, P.S.; Vani, K.S.; Ahammad, S.H.; Eid, M.M.; Rashed, A.N.; Hossain, M.A.; et al. Design and development of surface plasmon polariton resonance four-element triple-band multi-input multioutput systems for LTE/5G applications. Plasmonics 2023, 18, 1949–1958. [Google Scholar] [CrossRef]
  8. Vasimalla, Y.; Pradhan, H.S.; Pandya, R.J.; Saikumar, K.; Anwer, T.M.; Rashed, A.N.; Hossain, M.A. Titanium dioxide-2d nanomaterial based on the surface plasmon resonance (SPR) biosensor performance signature for infected red cells detection. Plasmonics 2023, 18, 1725–1734. [Google Scholar] [CrossRef]
  9. Nejdi, I.H.; Das, S.; Rhazi, Y.; Madhav, B.T.; Bri, S.; Aitlafkih, M. A compact planar multi-resonant multi-broadband fractal monopole antenna for Wi-Fi, WLAN, Wi-Max, Bluetooth, LTE, S, C, and X band Wireless Communication Systems. J. Circuits Syst. Comput. 2022, 31, 75–89. [Google Scholar] [CrossRef]
  10. Al-Tamimi, H.M. Design of double notch band half-elliptical shape reconfigurable antenna for UWB applications. Eng. Technol. J. 2019, 37, 85–89. [Google Scholar] [CrossRef]
  11. Yan, Y.; Li, L.; Zhang, J.; Hu, H.; Zhu, Y.; Chen, H.; Fang, Q. Design of Y-type branch broadband dual-polarization antenna and C-type slot line notch antenna. Prog. Electromagn. Res. M 2021, 106, 105–115. [Google Scholar] [CrossRef]
  12. Lanka, M.D.; Chalasani, S. Development of low profile M-shaped monopole antenna for Sub 6 GHz bluetooth, LTE, ISM, Wi-Fi and WLAN applications. Int. J. Intell. Eng. Syst. 2021, 14, 159–167. [Google Scholar]
  13. Meher, P.R.; Behera, B.R.; Mishra, S.K. A compact circularly polarized cubic DRA with unit-step feed for bluetooth/ism/wi-fi/wi-max applications. AEU-Int. J. Electron. Commun. 2021, 128, 153–167. [Google Scholar] [CrossRef]
  14. Sarma, C.A.; Inthiyaz, S.; Madhav, B.T. Design and assessment of bio-inspired antennas for Mobile Communication Systems. Int. J. Electr. Electron. Res. 2023, 11, 176–184. [Google Scholar] [CrossRef]
  15. Sandeep, D.R.; Madhav, B.T.; Das, S.; Hussain, N.; Islam, T.; Alathbah, M. Performance analysis of skin contact wearable textile antenna in human sweat environment. IEEE Access 2023, 11, 62039–62050. [Google Scholar] [CrossRef]
  16. Aghoutane, B.; Das, S.; El Faylali, H.; Madhav, B.T.; El Ghzaoui, M.; El Alami, A. Analysis, design and fabrication of a square slot loaded (SSL) millimeter-wave patch antenna array for 5G applications. J. Circuits Syst. Comput. 2020, 30, 215–227. [Google Scholar] [CrossRef]
  17. Rashmi, R.; Ramachandran, P.; Vasu, M. A low profile dual band dual C—shaped monopole antenna for Wi-Fi, WiMAX and WLAN applications. I-Manager’s J. Wirel. Commun. Netw. 2023, 11, 14–25. [Google Scholar]
  18. Eunice, J.; Popescu, D.E.; Chowdary, M.K.; Hemanth, J. Deep learning-based leaf disease detection in crops using images for agricultural applications. Agronomy 2022, 12, 2395. [Google Scholar]
  19. Dey, N.S.; Mohanty, R.; Chugh, K.L. Speech and speaker recognition system using artificial neural networks and Hidden Markov model. In Proceedings of the 2012 International Conference on Communication Systems and Network Technologies, Rajkot, India, 11–13 May 2012; pp. 311–315. [Google Scholar]
  20. Baskar, M.; Ramkumar, J.; Karthikeyan, C.; Anbarasu, V.; Balaji, A.; Arulananth, T.S. Low rate ddos mitigation using real-time Multi Threshold Traffic Monitoring System. J. Ambient. Intell. Humaniz. Comput. 2021, 1–9. [Google Scholar] [CrossRef]
Figure 1. Basic convolutional neural network (CNN) model.
Figure 2. The proposed framework.
Figure 3. The architecture of the VGG-16 and AlexNet models: (a) VGG-16 architecture and (b) AlexNet architecture.
Figure 4. The primary entity of ResNet-16.
Figure 5. An architecture diagram of ResNet-16.
Figure 6. Sample breast cancer image dataset.
Table 1. Measurements of the suggested method’s efficacy on a foundational breast cancer database.

Model   | Classifier | Features | Ac    | St    | Spt   | Pre   | Fs
Model 1 | Mc—dt      | 1000.0   | 92.30 | 88.50 | 90    | 90.35 | 87.75
        | Mc—knn     |          | 91.49 | 88    | 87    | 88    | 90
        | Mc—lda     |          | 90.1  | 90    | 90.35 | 88.65 | 88.45
        | Mc—lr      |          | 89.70 | 88    | 87    | 88.99 | 91
        | Mc—svm     |          | 94.50 | 90    | 90.35 | 89    | 87
Model 2 | Mc—dt      | 1000.0   | 91    | 88    | 88.4  | 90    | 87
        | Mc—knn     |          | 90    | 87    | 85.0  | 87    | 88.9
        | Mc—lda     |          | 90    | 89    | 89    | 88    | 87.40
        | Mc—lr      |          | 88    | 87    | 85.5  | 88    | 90
        | Mc—svm     |          | 93    | 89    | 89    | 87.5  | 86
Model 3 | Mc—dt      | 1000.0   | 89    | 86.01 | 87.0  | 88    | 86
        | Mc—knn     |          | 89    | 85.4  | 84.10 | 85.3  | 88
        | Mc—lda     |          | 88    | 87.4  | 88    | 86    | 86
        | Mc—lr      |          | 87    | 85.4  | 84.0  | 86.5  | 89
        | Mc—svm     |          | 92    | 88    | 88    | 86    | 84.0
Table 2. Measures of the suggested method’s efficacy on a baseline database devoted to breast cancer.

Hybrid Model                  | Classifier | Features | Ac    | St    | Spt   | Pre   | Fs
ResNet-16 and VGG-16          | Mc—dt      | 200.0    | 93    | 89.30 | 91.00 | 91.00 | 93.20
                              | Mc—knn     |          | 92.40 | 88.60 | 88.00 | 89    | 92.40
                              | Mc—lda     |          | 91.60 | 90.60 | 91.30 | 90    | 91.60
                              | Mc—lr      |          | 90.60 | 88.60 | 88.00 | 90    | 90.00
                              | Mc—svm     |          | 95.40 | 90.80 | 91.30 | 90    | 95.40
ResNet-16 and AlexNet         | Mc—dt      | 200.0    | 91.70 | 88.00 | 89.30 | 90    | 91.70
                              | Mc—knn     |          | 90.90 | 87.50 | 86.40 | 88    | 90.90
                              | Mc—lda     |          | 90.00 | 89.00 | 90.00 | 89    | 90.00
                              | Mc—lr      |          | 89.00 | 88.00 | 86.50 | 89    | 89.00
                              | Mc—svm     |          | 94.00 | 90.00 | 90.00 | 88.50 | 94.0
AlexNet + VGG-16              | Mc—dt      | 200.0    | 90.0  | 87.00 | 88.00 | 89.0  | 90.0
                              | Mc—knn     |          | 89.54 | 86.00 | 85.00 | 86.30 | 90.0
                              | Mc—lda     |          | 89    | 88.11 | 89.0  | 87.00 | 89.0
                              | Mc—lr      |          | 88.0  | 86.00 | 85.00 | 87.50 | 88.0
                              | Mc—svm     |          | 93.00 | 88.40 | 89.0  | 87.00 | 93.0
ResNet-16 + VGG-16 + AlexNet  | Mc—dt      | 300.0    | 94.40 | 90.40 | 92.0  | 92.60 | 93.50
                              | Mc—knn     |          | 93.80 | 91.70 | 90.00 | 90.00 | 93.90
                              | Mc—lda     |          | 92.82 | 91.80 | 92.50 | 91.00 | 92.80
                              | Mc—lr      |          | 92.00 | 91.80 | 91.20 | 91.30 | 91.70
                              | Mc—svm     |          | 94.00 | 93.00 | 93.60 | 91.90 | 94.00