Proceeding Paper

A Review on Medical Image Analysis Using Deep Learning †

1 Department of Electronics and Communication Engineering, Gayatri Vidya Parishad College for Degree and PG Courses (A), Visakhapatnam 530045, India
2 Department of Electronics and Communication Engineering, Gayatri Vidya Parishad College of Engineering (A), Kommadi, Visakhapatnam 530048, India
* Author to whom correspondence should be addressed.
Presented at the 5th International Conference on Innovative Product Design and Intelligent Manufacturing Systems (IPDIMS 2023), Rourkela, India, 6–7 December 2023.
Eng. Proc. 2024, 66(1), 7; https://doi.org/10.3390/engproc2024066007
Published: 28 June 2024

Abstract

The objective of medical image analysis is to increase the effectiveness of diagnosis. The Convolutional Neural Network (CNN) is the predominant neural network architecture used in Deep Learning (DL) for medical image analysis. Recently, various innovative DL techniques, such as new activation functions, optimization techniques, and loss functions, have enhanced the performance of CNNs. The DL-based CNN (DL-CNN) serves as a valuable tool that assists radiologists in diagnosis and improves efficiency and accuracy. Numerous DL-CNN methods have been published for analyzing medical images. This paper compiles the performance metrics of DL-CNNs as presented by various authors, reviewing the image analysis of six diseases, viz., lung cancer, colorectal cancer, liver cancer, stomach cancer, breast cancer, and brain tumors.

1. Introduction

DL is a subfield of machine learning that focuses on training artificial neural networks with multiple layers to learn and make predictions from the available data set. DL models are built using artificial neural networks, which are inspired by the structure and functioning of the human brain. These networks consist of interconnected neurons organized into layers; the connections between neurons carry weights that are adjusted during the training process to optimize network performance. DL systems have been shown to be effective for analyzing medical images [1]. Familiar imaging modalities used to detect abnormalities in human organs include MRI (Magnetic Resonance Imaging) [2], X-ray, CT (Computed Tomography) scans [3], ultrasound, and mammography [4]. Enhancing diagnostic capacity is the goal of medical image processing [5]. Image enhancement, segmentation, feature extraction, and classification are the main components of medical image analysis [3,6,7,8]. This paper reviews the image analyses of six diseases, viz., lung cancer, colorectal cancer, liver cancer, stomach cancer, breast cancer, and brain tumors. In medical image analysis, the data set is collected from authorized websites or as real-time data from reputed medical institutes. Typically, 70 percent of the data set is used for training and 30 percent for testing the designed model, which may be based on machine learning (ML) or DL algorithms. When a very large data set is available, the performance of DL algorithms is superior to that of ML. To enhance DL performance further, the images are preprocessed before they are fed to the DL network, as discussed in Section 2.
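The 70/30 train/test split described above can be sketched as follows. This is a minimal illustration with a synthetic stand-in for a medical image data set; in practice, a library routine such as scikit-learn's train_test_split is commonly used, and all array sizes here are hypothetical.

```python
import numpy as np

def train_test_split(images, labels, train_fraction=0.7, seed=0):
    """Shuffle the data set and split it into training and test subsets."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(images))          # random ordering of samples
    n_train = int(train_fraction * len(images)) # 70% of samples for training
    train_idx, test_idx = idx[:n_train], idx[n_train:]
    return (images[train_idx], labels[train_idx],
            images[test_idx], labels[test_idx])

# Toy stand-in for a medical image data set: 100 "images" of 8x8 pixels.
images = np.random.rand(100, 8, 8)
labels = np.random.randint(0, 2, size=100)
x_train, y_train, x_test, y_test = train_test_split(images, labels)
print(len(x_train), len(x_test))  # 70 30
```

Shuffling before splitting matters because medical data sets are often ordered by institution or acquisition date, which would otherwise bias the test set.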

2. Architecture of DL-CNN

The architecture of the DL-CNN is as follows:
Convolution layers: The DL-CNN consists of several convolutional layers. In each layer, convolution operations are performed on the input images with various filter kernels.
Activation function: After each convolutional layer, an activation function, typically the Rectified Linear Unit (ReLU), is applied.
Pooling layers: These layers reduce the spatial dimensions of the feature maps while retaining the important information. The most common pooling operation is max pooling, which selects the maximum value within each pooling region and discards the rest.
Fully connected layers: These layers integrate the information from the previous layers and are crucial for learning complex patterns and making decisions.
Output layer: This final layer of the network provides the predicted probabilities for each class.
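The layer sequence above can be sketched end-to-end with NumPy. This is an illustrative forward pass only (no training), with a toy 8x8 single-channel "scan", one random 3x3 kernel, and a two-class output; all sizes are hypothetical assumptions, not from the reviewed papers.

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2-D convolution (cross-correlation) of a single-channel image."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i+kh, j:j+kw] * kernel)
    return out

def relu(x):
    """Rectified Linear Unit: zero out negative activations."""
    return np.maximum(0, x)

def max_pool(x, size=2):
    """Non-overlapping max pooling: keep the maximum of each size x size region."""
    h, w = x.shape[0] // size, x.shape[1] // size
    return x[:h*size, :w*size].reshape(h, size, w, size).max(axis=(1, 3))

def softmax(z):
    """Convert raw scores into class probabilities that sum to 1."""
    e = np.exp(z - z.max())
    return e / e.sum()

rng = np.random.default_rng(0)
image = rng.random((8, 8))             # toy single-channel input "scan"
kernel = rng.standard_normal((3, 3))   # one convolution filter

features = max_pool(relu(conv2d(image, kernel)))  # 8x8 -> 6x6 -> pooled 3x3
flat = features.ravel()                           # flatten for the FC layer
weights = rng.standard_normal((2, flat.size))     # fully connected layer, 2 classes
probs = softmax(weights @ flat)                   # output layer: class probabilities
print(probs.shape)  # (2,)
```

In a real DL-CNN this pattern (convolution, ReLU, pooling) is stacked several times before the fully connected and output layers, and the kernel and weight values are learned by backpropagation rather than drawn at random.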

3. Review on the Literature of DL-CNN on Medical Image Analysis

In this review paper, the performance of the DL-CNN, as published by various authors, is investigated. The performance of the DL-CNN is presented in Table 1 for various abnormalities.

4. Conclusions

This review paper compares the performance of various DL-CNNs on life-threatening cancers. All the surveyed papers achieved significant accuracy because the proposed DL-CNNs were trained on very large data sets. The authors used various types of kernels and activation functions to enhance image quality and extract features. This review provides researchers with a benchmark for developing more accurate DL algorithms using the most efficient programming languages.

Author Contributions

Conceptualization; methodology; formal analysis; investigation; data curation, writing—original draft preparation, R.E.; writing—review and editing; visualization; supervision, M.V.S.S. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

This article does not contain any studies with human participants or animals performed by any of the authors.

Informed Consent Statement

Not applicable.

Data Availability Statement

No new data were created or analyzed in this study. Data sharing is not applicable to this article.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Zhang, Y.; Zhang, S.; Al-Rfou, R.; Alain, G.; Almahairi, A.; Angermueller, C.; Bahdanau, D.; Ballas, N.; Bastien, F.; Bayer, J.; et al. Theano: A Python framework for fast computation of mathematical expressions. arXiv 2016, arXiv:1605.02688. [Google Scholar] [CrossRef]
  2. Kurihara, Y.; Matsuoka, S.; Yamashiro, T.; Fujikawa, A.; Matsushita, S.; Yagihashi, K.; Nakajima, Y. MRI of pulmonary nodules. AJR Am. J. Roentgenol. 2014, 202, W210–W216. [Google Scholar] [CrossRef] [PubMed]
  3. Iftikhar, S.; Fatima, K.; Rehman, A.; Almazyad, A.; Saba, T. An evolution based hybrid approach for heart diseases classification and associated risk factors identification. Biomed. Res. 2017, 28, 3451–3455. [Google Scholar]
  4. Data Collection, Sharing was Supported by the National Cancer Institute Funded Breast Cancer Surveillance Consortium (HHSN261201100031C), Digital Mammography Dataset. Available online: http://www.bcsc-research.org/ (accessed on 12 September 2023).
  5. Ritter, F.; Boskamp, T.; Homeyer, A.; Laue, H.; Schwier, M.; Link, F.; Peitgen, H.-O. Medical image analysis. IEEE Pulse 2011, 2, 60–70. [Google Scholar] [CrossRef]
  6. Ker, J.; Wang, L.; Rao, J.; Lim, T. Deep learning applications in medical image analysis. IEEE Access 2017, 6, 9375–9389. [Google Scholar] [CrossRef]
  7. Pradhan, A.; Deepak, B.B.V.L. Obtaining hand gesture parameters using image processing. In Proceedings of the 2015 International Conference on Smart Technologies and Management for Computing, Communication, Controls, Energy and Materials (ICSTM), Chennai, India, 6–8 May 2015; IEEE: Piscataway, NJ, USA, 2015; pp. 168–170. [Google Scholar]
  8. Rout, A.; Deepak, B.B.V.L.; Biswal, B.B.; Mahanta, G.B.; Gunji, B.M. An optimal image processing method for simultaneous detection of weld seam position and weld gap in robotic arc welding. Int. J. Manuf. Mater. Mech. Eng. (IJMMME) 2018, 8, 37–53. [Google Scholar] [CrossRef]
  9. Zheng, S.; Guo, J.; Langendijk, J.A.; Both, S.; Veldhuis, R.N.J.; Oudkerk, M.; van Ooijen, P.M.A.; Wijsman, R.; Sijtsema, N.M. Survival prediction for stage I-IIIA non-small cell lung cancer using deep learning. Radiother. Oncol. 2023, 180, 109483. [Google Scholar] [CrossRef]
  10. Zhang, Y.; Mikhael, P.G.; Takigami, A.K.; Bourgouin, P.P.; Mrah, S.; Amayri, W.; Wohlwend, J.; Yala, A.; Karstens, L.; Barzilay, R.; et al. Sybil: A validated deep learning model to predict future lung cancer risk from a single low-dose chest computed tomography. J. Clin. Oncol. 2023, 41, 2191–2200. [Google Scholar] [CrossRef]
  11. Pandit, B.R.; Alsadoon, A.; Prasad, P.W.C.; Al Aloussi, S.; Rashid, T.A.; Alsadoon, O.H.; Jerew, O.D. Deep learning neural network for lung cancer classification: Enhanced optimization function. Multimed. Tools Appl. 2023, 82, 6605–6624. [Google Scholar] [CrossRef]
  12. Cheng, M.; Lin, R.; Bai, N.; Zhang, Y.; Wang, H.; Guo, M.; Duan, X.; Zheng, J.; Qiu, Z.; Zhao, Y. Deep learning for predicting the risk of immune checkpoint inhibitor-related pneumonitis in lung cancer. Clin. Radiol. 2023, 78, 102925. [Google Scholar] [CrossRef]
  13. Ito, N.; Kawahira, H.; Nakashima, H.; Uesato, M.; Miyauchi, H.; Matsubara, H. Endoscopic diagnostic support system for cT1b colorectal cancer using deep learning. Oncology 2018, 96, 44–50. [Google Scholar] [CrossRef] [PubMed]
  14. Wang, Y.; Zhang, X.; Zhang, X.; Wang, J. Deep learning-based colorectal cancer detection and segmentation from digitized H&E-stained histology slides. Diagn. Pathol. 2019, 14, 13. [Google Scholar]
  15. Chen, Y.; Zhang, X.; Wang, J.; Wang, W. Deep learning for colorectal cancer detection from whole slide images. Diagn. Pathol. 2022, 17, 12. [Google Scholar]
  16. Liu, L.; Zhang, X.; Wang, J.; Wang, W. Deep learning for colorectal cancer detection from histopathology images. Diagn. Pathol. 2021, 16, 10. [Google Scholar]
  17. Zhang, X.; Wang, W.; Zhang, Y.; Wang, J.; Chen, X. A deep learning model for colorectal cancer detection using endoscopic images. Diagn. Pathol. 2021, 16, 11. [Google Scholar]
  18. Ho, C.; Ho, Y.W.; Tan, C.T.; Zhao, Z.; Chen, X.F.; Sauer, J.; Saraf, S.A.; Jialdasani, R.; Taghipour, K.; Sathe, A.; et al. A promising deep learning assistive algorithm for histopathological screening of colorectal cancer. Nat. Sci. Rep. 2022, 12, 1587. [Google Scholar] [CrossRef] [PubMed]
  19. Al Duhayyim, M.; Mengash, H.A.; Marzouk, R.; Nour, M.K.; Mahgoub, H.; Althukair, F.; Mohamed, A. Hybrid Rider Optimization with Deep Learning Driven Biomedical Liver Cancer Detection and Classification. Comput. Intell Neurosci. 2022, 2022, 6162445. [Google Scholar] [CrossRef]
  20. Zhen, S.; Cheng, M.; Tao, Y.-B.; Wang, Y.-F.; Juengpanich, S.; Jiang, Z.-Y.; Jiang, Y.-K.; Yan, Y.-Y.; Lu, W.; Cai, X.-J.; et al. Deep learning for accurate diagnosis of liver tumor based on magnetic resonance imaging and clinical data. Front. Oncol. 2020, 10, 680. [Google Scholar] [CrossRef] [PubMed]
  21. Feng, X.; Cai, W.; Zheng, R.; Tang, L.; Zhou, J.; Wang, H.; Liao, J.; Luo, B.; Cheng, W.; Wei, A.; et al. Diagnosis of hepatocellular carcinoma using deep network with multi-view enhanced patterns mined in contrast-enhanced ultrasound data. Eng. Appl. Artif. Intell. 2023, 118, 105635. [Google Scholar] [CrossRef]
  22. Guo, Q.; Yu, W.; Song, S.; Wang, W.; Xie, Y.; Huang, L.; Wang, J.; Jia, Y.; Wang, S. Pathological Detection of Micro and Fuzzy Gastric Cancer Cells Based on Deep Learning. Comput. Math. Methods Med. 2023, 2023, 5147399. [Google Scholar] [CrossRef]
  23. Zhu, X.; Ma, Y.; Guo, D.; Men, J.; Xue, C.; Cao, X.; Zhang, Z. A Framework to Predict Gastric Cancer Based on Tongue Features and Deep Learning. Micromachines 2022, 14, 53. [Google Scholar] [CrossRef] [PubMed] [PubMed Central]
  24. Teramoto, A.; Shibata, T.; Yamada, H.; Hirooka, Y.; Saito, K.; Fujita, H. Detection and Characterization of Gastric Cancer Using Cascade Deep Learning Model in Endoscopic Images. Diagnostics 2022, 12, 1996. [Google Scholar] [CrossRef] [PubMed] [PubMed Central]
  25. Khan, S.; Islam, N.; Jan, Z.; Din, I.U.; Rodrigues, J.J.P.C. A novel deep learning based framework for the detection and classification of breast cancer using transfer learning. Pattern Recognit. Lett. 2019, 125, 1–6. [Google Scholar] [CrossRef]
  26. Sánchez-Cauce, R.; Pérez-Martín, J.; Luque, M. Multi-input convolutional neural network for breast cancer detection using thermal images and clinical data, Comput. Methods Programs Biomed. 2021, 204, 106045. [Google Scholar] [CrossRef]
  27. Ragab, M.; Albukhari, A.; Alyami, J.; Mansour, R.F. Ensemble deep-learning enabled clinical decision support system for breast cancer diagnosis and classification on ultrasound images. Biology 2022, 11, 439. [Google Scholar] [CrossRef] [PubMed]
  28. Kavitha, T.; Mathai, P.P.; Karthikeyan, C.; Ashok, M.; Kohar, R.; Avanija, J.; Neelakandan, S. Deep learning based capsule neural network model for breast cancer diagnosis using mammogram images, Interdiscip. Sci. Comput. Life Sci. 2022, 14, 113–129. [Google Scholar] [CrossRef]
  29. Wang, X.; Zhang, L.; Zhang, X.; Wang, J. A novel procedure for mammogram diagnosis based on a single-image feature. Diagn. Pathol. 2020, 15, 12. [Google Scholar]
  30. Ahmmed, R.; Swkshar, A.S.; Hossain, F.; Rafiq, A. Classification of Tumors and It Stages in Brain MRI Using Support Vector Machine and Artificial Neural Network. In Proceedings of the International Conference on Electrical, Computer and Communication Engineering (ECCE), Cox’s Bazar, Bangladesh, 16–18 February 2017. [Google Scholar]
  31. Menze, B.H.; Jakab, A.; Bauer, S.; Kalpathy-Cramer, J.; Farahani, K.; Kirby, J.; Burren, Y.; Porz, N.; Slotboom, J.; Wiest, R.; et al. The multimodal brain tumor image segmentation benchmark (BRATS). IEEE Trans. Med. Imag. 2014, 33, 1993–2024. [Google Scholar] [CrossRef] [PubMed]
  32. Khairandish, M.; Sharma, M.; Jain, V.; Chatterjee, J.; Jhanjhi, N. A hybrid CNN-svm threshold segmentation approach for tumor detection and classification of MRI brain images. IRBM 2021, 43, 290–299. [Google Scholar] [CrossRef]
  33. Hashemzehi, R.; Mahdavi, S.J.S.; Kheirabadi, M.; Kamel, S.R. Detection of brain tumors from MRI images base on deep learning using hybrid model CNN and NADE. Biocybern. Biomed. Eng. 2020, 40, 1225–1232. [Google Scholar] [CrossRef]
  34. Badža, M.M.; Barjaktarović, M.Č. Classification of brain tumors from MRI images using a convolutional neural network. Appl. Sci. 2020, 10, 1999. [Google Scholar] [CrossRef]
  35. Mehrotra, R.; Ansari, M.; Agrawal, R.; Anand, R. A transfer learning approach for AI-based classification of brain tumors. Mach. Learn. Appl. 2020, 2, 100003. [Google Scholar] [CrossRef]
Table 1. Various abnormalities and corresponding techniques.

Reference           | Disease           | Modality                                                                                              | Performance
[9,10,11,12]        | Lung cancer       | CT scan                                                                                               | AUC 0.64–0.93; accuracy 75–99.5%
[13,14,15,16,17,18] | Colorectal cancer | Endoscopy [13,17]; whole slide images (WSIs) [14,18]; slide images [15]; histopathology images [16]   | AUC 0.96–0.97; accuracy 81.2–95%
[19,20,21]          | Liver cancer      | CT scan [19]; MRI [20]; CEUS data [21]                                                                | Accuracy 89–94.5%
[22,23,24]          | Stomach cancer    | Pathological images [22]; tongue images [23]; endoscopic images [24]                                  | Accuracy 61.1–99.6%
[25,26,27,28,29]    | Breast cancer     | Infrared images [25,26,28]; mammogram images [27,29]                                                  | Accuracy 90–97.53%
[30,31,32,33,34,35] | Brain tumor       | MRI                                                                                                   | Accuracy 92.3–97.5%

Share and Cite

Egala, R.; Sairam, M.V.S. A Review on Medical Image Analysis Using Deep Learning. Eng. Proc. 2024, 66, 7. https://doi.org/10.3390/engproc2024066007
