Proceeding Paper

A Comprehensive Review on the Application of 3D Convolutional Neural Networks in Medical Imaging †

by Satyam Tiwari ¹, Goutam Jain ¹, Dasharathraj K. Shetty ¹·*, Manu Sudhi ², Jayaraj Mymbilly Balakrishnan ² and Shreepathy Ranga Bhatta ³

¹ Department of Data Science and Computer Applications, Manipal Institute of Technology, Manipal Academy of Higher Education, Manipal 576104, Karnataka, India
² Department of Emergency Medicine, Kasturba Medical College, Manipal, Manipal Academy of Higher Education, Manipal 576104, Karnataka, India
³ Department of Humanities and Management, Manipal Institute of Technology, Manipal Academy of Higher Education (MAHE), Manipal 576104, Karnataka, India
* Author to whom correspondence should be addressed.
Presented at the International Conference on Recent Advances in Science and Engineering, Dubai, United Arab Emirates, 4–5 October 2023.
Eng. Proc. 2023, 59(1), 3; https://doi.org/10.3390/engproc2023059003
Published: 11 December 2023
(This article belongs to the Proceedings of Eng. Proc., 2023, RAiSE-2023)

Abstract:
Convolutional Neural Networks (CNNs) are deep learning models designed primarily for processing and evaluating visual input, which makes them highly applicable to medical imaging. CNNs are particularly adept at automatically identifying complex patterns and features in images such as X-rays, CT scans, and MRIs, capturing hierarchical information through layers of convolution and pooling operations. By enabling precise disease diagnosis, anatomical structure segmentation, and even the prediction of patient outcomes, CNNs have transformed medical imaging. In this review, we examine how CNNs improve diagnostic effectiveness and efficiency across a range of medical imaging applications, focusing on the development and use of 3D CNNs for processing and classifying multidimensional and moving images. The paper discusses the importance of 3D CNNs in areas such as surveillance video analysis and, especially, in medical imaging for locating pathological tissue. With this method, pathologists can segment the layers of the bladder far more accurately, cutting down on the time spent on manual review. CNNs use learned filters to capture spatial and temporal relationships in images, making the images easier to understand and interpret. Because they have fewer parameters and reuse shared weights, CNNs fit image datasets well and can represent complex images effectively. This review shows how far 3D CNNs have already come and how they could improve the speed and accuracy of processing and analyzing medical images.

1. Introduction

Artificial intelligence (AI) breakthroughs have had a revolutionary impact across various industries. The healthcare sector, in particular, has experienced a significant digital transformation, with machine learning (ML) algorithms playing a vital role in analyzing complicated medical data and assisting in diagnostics, prognostics, and treatment planning [1]. Convolutional Neural Networks (CNNs) are a well-known ML method. CNNs were initially developed and widely utilized in computer vision for object detection and recognition [2], but have lately emerged as a powerful tool for medical image analysis. Adapting CNNs to process volumetric data led to the creation of 3D CNNs. Unlike their 2D counterparts, 3D CNNs can capture spatial as well as temporal or depth-related information in three-dimensional data, allowing for a more thorough analysis [3]. This has been especially valuable in medical imaging, where three-dimensional images such as Magnetic Resonance Imaging (MRI), Computed Tomography (CT), and ultrasound scans are commonly used. They have been used successfully to detect and classify diseased tissues more precisely, and in less time, than standard techniques [4]. Despite their growing popularity and significance, thorough literature on the use of 3D CNNs in medical imaging remains scarce. This paper addresses that gap by offering an in-depth review of 3D CNNs, discussing their operation, applications in medical imaging, and implementation issues. The article also discusses how 3D CNNs have enhanced the speed and accuracy of medical image processing, ultimately transforming the field of medical diagnostics and therapy.

Background

The necessity to integrate volumetric data with depth information motivated the move from 2D to 3D Convolutional Neural Networks. The authors of Refs. [3,4,5] were among the first to apply CNNs to 3D data, using 3D CNNs to recognize human actions in videos. They demonstrated the advantage of 3D CNNs in capturing both spatial and temporal correlations within videos, paving the way for further study and applications in this field.
The concept of 3D imaging is not new in the field of medical imaging. Volumetric 3D images are generated by medical imaging techniques such as Computed Tomography (CT), Magnetic Resonance Imaging (MRI), and Positron Emission Tomography (PET) [6]. However, the use of 3D CNNs in medical imaging has only recently begun to gain traction. Early uses of 3D CNNs in medical imaging focused on segmentation tasks, such as brain tissue segmentation in MR images [7] or lung nodule segmentation in CT images [8]. More recent uses have expanded to include disease identification and treatment response prediction [4].

2. Methodology

This comprehensive study examined peer-reviewed articles from reputable sources to highlight 3D Convolutional Neural Networks (CNNs) in medical imaging. Relevance to the primary theme, publication venue reputation, and field contribution were the selection criteria. The technique examined CNNs’ practical benefits in improving diagnostic precision and efficiency. This structured approach highlighted the importance of 3D CNNs in medical diagnostics, in particular advanced pathological tissue segmentation and dynamic imaging data analysis, which could revolutionize medical imaging through improved accuracy and processing time.

3. Discussion

3.1. Review Highlights on Convolutional Neural Network

Since their debut, Convolutional Neural Networks (CNNs) have been at the vanguard of significant advances in image processing. Their organization is inspired by the human visual cortex, exploiting spatial hierarchies and spatial correlations in images [2]. Convolutional layers, pooling layers, and fully connected layers are the main components of a conventional CNN. Convolutional layers employ a set of learnable filters or kernels; when convolved with the input image, these generate feature maps that capture specific features in the image. Pooling layers help reduce spatial dimensions, making the network insensitive to small image translations. Finally, the fully connected layers combine these features to perform the final task, such as classification or regression [9].
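To make these building blocks concrete, the sketch below (illustrative only, not code from any reviewed study) implements a single valid-mode convolution filter and a 2 × 2 max-pooling step in plain Python:

```python
def conv2d(image, kernel):
    """Valid-mode 2D convolution (cross-correlation, as used in CNN layers)."""
    kh, kw = len(kernel), len(kernel[0])
    out_h = len(image) - kh + 1
    out_w = len(image[0]) - kw + 1
    return [[sum(image[i + u][j + v] * kernel[u][v]
                 for u in range(kh) for v in range(kw))
             for j in range(out_w)]
            for i in range(out_h)]

def max_pool2d(fmap, size=2):
    """Non-overlapping max pooling: keep the strongest response per window."""
    return [[max(fmap[i + u][j + v]
                 for u in range(size) for v in range(size))
             for j in range(0, len(fmap[0]) - size + 1, size)]
            for i in range(0, len(fmap) - size + 1, size)]

# A 4x4 "image" convolved with a 2x2 diagonal-difference kernel, then pooled.
img = [[1, 2, 0, 1],
       [3, 1, 1, 0],
       [0, 2, 2, 1],
       [1, 0, 1, 3]]
kernel = [[1, 0],
          [0, -1]]
fmap = conv2d(img, kernel)   # 3x3 feature map
pooled = max_pool2d(fmap)    # pooled summary of the feature map
```

Each feature-map entry is a dot product of the kernel with one image patch; pooling then keeps only the strongest response per window, shrinking the map and granting tolerance to small translations.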
CNNs are noteworthy in their capacity to automatically build hierarchical feature representations from data, eliminating the need for manual feature extraction, which was common in older machine-learning approaches. This has increased their applications in various disciplines, such as computer vision, natural language processing, and even audio signal processing.
CNN developments in recent years have aimed at enhancing efficiency, accuracy, and the ability to handle complicated, large-scale data. EfficientNet and the Vision Transformer [10] are two novel architectures with state-of-the-art performance on various benchmarks. EfficientNet uses a fixed set of scaling coefficients to scale network width, depth, and resolution together, allowing it to achieve higher accuracy with fewer parameters. Vision Transformers, on the other hand, apply transformer architectures to images by splitting them into patches and treating the patches as a sequence, yielding outstanding performance on large-scale datasets.
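EfficientNet's compound scaling can be sketched with the base coefficients reported in the original paper (depth α = 1.2, width β = 1.1, resolution γ = 1.15, chosen so that α·β²·γ² ≈ 2); the snippet below is an illustration of the idea, not the library implementation:

```python
def compound_scale(phi, alpha=1.2, beta=1.1, gamma=1.15):
    """Return (depth, width, resolution) multipliers for compound coefficient phi.

    Defaults are the base coefficients from the EfficientNet paper; because
    alpha * beta**2 * gamma**2 ~ 2, each unit of phi roughly doubles FLOPs.
    """
    return alpha ** phi, beta ** phi, gamma ** phi

# Scaling up from the baseline network with compound coefficient phi = 2:
depth_mult, width_mult, res_mult = compound_scale(2)
```

All three dimensions grow in a fixed ratio, which is what lets the family trade accuracy against compute with a single knob instead of tuning depth, width, and input size independently.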
However, most of these advances have been limited to processing 2D images. The potential of CNNs for processing volumetric data is huge and largely untapped, an area where 3D CNNs have begun to show promise, particularly in medical imaging.

3.2. 3D Convolutional Neural Network

By adding a third dimension to the convolutional filters, 3D Convolutional Neural Networks (3D CNNs) extend typical 2D CNNs to process volumetric input. The fundamental premise stays the same: efficient image processing relies on spatial hierarchies and shared weights. By accounting for the third dimension, however, 3D CNNs can capture depth-related information in addition to spatial features, making them an excellent choice for analyzing volumetric data [3].
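A minimal plain-Python sketch (an illustration, not a production implementation) shows how convolution generalizes to the third dimension: the kernel now slides over depth, height, and width of the volume at once:

```python
def conv3d(volume, kernel):
    """Valid-mode 3D convolution over a (depth, height, width) volume."""
    kd, kh, kw = len(kernel), len(kernel[0]), len(kernel[0][0])
    D = len(volume) - kd + 1
    H = len(volume[0]) - kh + 1
    W = len(volume[0][0]) - kw + 1
    return [[[sum(volume[d + a][h + b][w + c] * kernel[a][b][c]
                  for a in range(kd) for b in range(kh) for c in range(kw))
              for w in range(W)]
             for h in range(H)]
            for d in range(D)]

# A 2x2x2 averaging kernel (8 weights of 1/8) over a 3x3x3 volume of ones:
vol = [[[1.0] * 3 for _ in range(3)] for _ in range(3)]
kern = [[[0.125] * 2 for _ in range(2)] for _ in range(2)]
out = conv3d(vol, kern)
```

The same weight sharing applies as in 2D, but each output voxel now summarizes a small sub-volume, which is how adjacent CT or MRI slices contribute context that per-slice 2D filters discard.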
Significant research has been undertaken since 2020 to optimize and adapt 3D CNNs for diverse purposes. Medical imaging is one such area that has received a lot of attention, with 3D CNNs being utilized for tasks including segmentation, detection, and classification of diseased tissues in volumetric pictures [4].
In a recent study, ref. [11] proposed a 3D CNN architecture for segmenting brain tumors from MRI data. The model was trained on a large dataset and produced outstanding results, surpassing other cutting-edge models on the same task. Similarly, ref. [8] used a 3D CNN for early lung cancer detection in CT scans. The network was trained to recognize small, early-stage nodules that would otherwise be difficult to detect with existing methods.
Due to the enormous dimensionality of the input, training 3D CNNs is computationally expensive and time consuming. Some researchers have proposed hybrid models that mix 2D and 3D convolutions to address this issue. For example, ref. [7] introduced a mixed-scale dense network that employs both 2D and 3D convolutions, allowing the model to efficiently learn multi-scale features.
Furthermore, the addition of other approaches, such as transfer learning, has been demonstrated to dramatically improve the performance of 3D CNNs. Ref. [11] demonstrated enhanced performance compared to models learned from scratch by using pre-trained models as a starting point for training 3D CNNs for prostate cancer diagnosis.
Overall, while 3D CNNs have considerable potential, practical obstacles exist, such as computing demands and the need for vast amounts of labeled data. Addressing these difficulties is a current field of study.

3.3. Applications of 3D CNNs in Medical Imaging

Medical imaging techniques are crucial to diagnostic and therapeutic decision-making in modern healthcare. Techniques such as Positron Emission Tomography (PET), Computed Tomography (CT), Magnetic Resonance Imaging (MRI), and ultrasound (US) generate volumetric data containing rich three-dimensional spatial information. Applying 3D CNNs to these data allows for the automated detection, classification, and segmentation of anatomical structures and pathologies, providing opportunities for increased diagnostic accuracy and efficiency [4]. Figure 1 shows a generic 3D CNN process flow.

3.4. Overview of Medical Imaging Techniques and the Role of 3D CNNs

CT and MRI: CT and MRI scans provide detailed cross-sectional views of body tissues. While 2D CNNs can process individual slices, the 3D context is lost. Three-dimensional CNNs retain this context, providing a more holistic understanding of the spatial relationship between structures. A 2021 study by [11] demonstrated a 3D CNN’s effectiveness in MRI-based brain tumor segmentation, surpassing traditional methods in both speed and accuracy.
PET: In PET imaging, 3D CNNs can help analyze the distribution and intensity of radiotracers, assisting in detecting and quantifying the severity of diseases such as cancer. A study successfully applied 3D CNNs to PET images to detect early stages of Alzheimer’s disease.
Ultrasound: In ultrasound imaging, 3D CNNs can help in the speckle noise reduction and segmentation of structures, such as fetal anatomy, in volumetric ultrasound data.
One specific example of 3D CNNs in action is the segmentation of bladder layers in pathological cases. Given the intricate structure of the bladder, manual segmentation is a time-consuming task requiring high expertise. However, 3D CNNs provide a promising solution. A study used a 3D CNN model for automated bladder segmentation from MRI images, achieving high accuracy and significantly reducing the segmentation time.
Speed: A study demonstrated the efficacy of a 3D CNN in lung cancer detection using CT images. The processing time was reduced to a few seconds per case from several minutes, allowing for high-throughput screening.
Accuracy: A study applied transfer learning with 3D CNNs for prostate cancer detection from MRI images. The model outperformed traditional methods in detecting cancerous regions, improving diagnostic accuracy.
Three-dimensional CNNs have shown significant potential for improving the processing and interpretation of medical imaging data. With ongoing research and advancements, their role in medical imaging is anticipated to grow even further.
The compilation in Table 1 presents a concise overview of several scholarly articles that employ Convolutional Neural Networks (CNNs) in the context of medical imaging applications.
The studies center on various medical disorders and imaging modalities. The reported metrics help assess how well a model performs in correctly classifying different medical conditions or segments within medical images. A high sensitivity indicates that the model is effective at identifying positive cases, whereas high accuracy can simply result from predicting the majority class [16]. When evaluating a 3D CNN for imaging, it is therefore crucial to consider the problem context and evaluation goals. In screening tasks, it often becomes necessary to prioritize sensitivity over specificity. The F1 score balances precision and sensitivity, which is particularly valuable when dealing with imbalanced classes [17]. As an illustration, one study employed Magnetic Resonance Imaging (MRI) to identify brain tumors, yielding a precision rate of 97.8% and a sensitivity rate of 97.32%. In another study, CT scans were employed as a screening tool for detecting lung nodules, resulting in a sensitivity of 86.30% and a specificity of 91.91%. Furthermore, researchers have employed CNNs to identify Alzheimer's disease using MRI and Positron Emission Tomography (PET) neuroimaging, attaining accuracies of 89.21% with PET and 97.35% with MRI. CNNs have also been utilized in breast cancer diagnosis to analyze ultrasound images, demonstrating sensitivities ranging from 95.06% to 97.62%. These studies provide evidence of the promising capabilities of CNNs in automating the processing of medical images and enhancing diagnostic accuracy for various critical medical conditions.
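The metrics discussed above follow directly from confusion-matrix counts; the sketch below (hypothetical counts, purely illustrative) computes sensitivity, specificity, accuracy, and F1 for an imbalanced screening scenario:

```python
def classification_metrics(tp, fp, tn, fn):
    """Sensitivity, specificity, accuracy, and F1 from confusion-matrix counts."""
    sensitivity = tp / (tp + fn)              # recall: diseased cases found
    specificity = tn / (tn + fp)              # healthy cases correctly rejected
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    precision = tp / (tp + fp)
    f1 = 2 * precision * sensitivity / (precision + sensitivity)
    return sensitivity, specificity, accuracy, f1

# Imbalanced example: 10 diseased vs. 90 healthy scans.
sens, spec, acc, f1 = classification_metrics(tp=9, fp=5, tn=85, fn=1)
```

Here accuracy looks strong (0.94) mostly because healthy scans dominate, while the F1 score (0.75) exposes the weaker precision on the rare positive class, which is exactly why class context matters when reading the numbers in Table 1.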

3.5. Challenges and Solutions in Using 3D CNNs in Medical Imaging

Despite the apparent benefits of adopting 3D CNNs in medical imaging, there are significant hurdles that academics and practitioners must overcome before they can be effectively implemented. These difficulties are usually divided into data requirements, computing demands, and model interpretability.
Three-dimensional CNNs, like other deep learning models, require a considerable quantity of labeled data to train. Because of privacy considerations, acquiring and annotating medical images can be time consuming, costly, and challenging [27]. Furthermore, class imbalance can impair the model's performance when particular conditions or features are under-represented in the dataset. To address data restrictions, techniques such as data augmentation and synthetic data generation have been explored. Transfer learning, which involves pre-training models on vast, diverse datasets and then fine-tuning them for specific tasks, has also shown promise in mitigating data scarcity.
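As a toy illustration of label-preserving augmentation for volumetric scans (axis flips only; real pipelines also use rotations, elastic deformations, and intensity shifts), one labeled volume can be turned into several training samples:

```python
def flip_depth(volume):
    """Flip a (depth, height, width) volume along the depth axis."""
    return volume[::-1]

def flip_width(volume):
    """Flip along the width axis (left-right mirror of every slice)."""
    return [[row[::-1] for row in sl] for sl in volume]

def augment(volume):
    """Yield simple label-preserving variants of one labeled scan."""
    yield volume
    yield flip_depth(volume)
    yield flip_width(volume)
    yield flip_width(flip_depth(volume))

# A tiny 2x2x2 "scan": four training volumes from one annotation.
vol = [[[1, 2], [3, 4]], [[5, 6], [7, 8]]]
variants = list(augment(vol))
```

Because the diagnostic label is unchanged by these geometric transforms, each annotated scan effectively multiplies the training set, easing the labeled-data bottleneck described above.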
Due to the enormous dimensionality of the input, training 3D CNNs is computationally demanding. This may necessitate additional training time and the purchase of specific hardware. To solve these issues, many strategies have been implemented. For effective feature extraction, one strategy is to use mixed-scale networks that incorporate 2D and 3D convolutions. Another option is to leverage cloud-based platforms with scalable computational resources.
Model Interpretability: Despite their efficacy, 3D CNNs are often considered “black boxes” whose decision-making processes are difficult to understand. This lack of transparency may impede the practical implementation of these models [28]. Attention mechanisms, which highlight the regions of the image the model focuses on for its decisions, and explainability methods such as Layer-wise Relevance Propagation (LRP) [29] have been proposed to increase the interpretability of 3D CNNs.
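LRP itself requires backpropagating relevance through every layer; a much simpler stand-in for the same idea (not LRP, but a common occlusion-sensitivity probe, sketched here with a hypothetical scoring function) measures how much a model's score drops when each region is masked out:

```python
def occlusion_map(image, score_fn, patch=2):
    """Occlusion sensitivity: score drop when each patch is zeroed out.

    Regions whose occlusion lowers the score the most are the ones the
    model relies on; this is a model-agnostic interpretability probe.
    """
    base = score_fn(image)
    h, w = len(image), len(image[0])
    heat = [[0.0] * (w // patch) for _ in range(h // patch)]
    for i in range(0, h - patch + 1, patch):
        for j in range(0, w - patch + 1, patch):
            occluded = [row[:] for row in image]
            for u in range(patch):
                for v in range(patch):
                    occluded[i + u][j + v] = 0
            heat[i // patch][j // patch] = base - score_fn(occluded)
    return heat

# Toy "model" that only looks at the top-left quadrant of a 4x4 slice:
score = lambda img: sum(img[i][j] for i in range(2) for j in range(2))
slice_2d = [[1, 1, 0, 0],
            [1, 1, 0, 0],
            [0, 0, 0, 0],
            [0, 0, 0, 0]]
heat = occlusion_map(slice_2d, score)
```

The heat map is nonzero only where the toy model actually looks, mirroring how such maps let clinicians check whether a network attends to pathological tissue rather than scanner artifacts.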
While 3D CNNs provide enormous prospects for improving medical imaging analysis, addressing these obstacles is critical for widespread implementation. More research and development in this field are required to fully exploit the promise of 3D CNNs in medical imaging.

3.6. Future Perspectives

The role of 3D CNNs in medical imaging is likely to expand as computational resources and machine-learning techniques improve. In this section, we look at probable future breakthroughs and how they might affect the efficiency and effectiveness of medical diagnostics.
Federated Learning: Acquiring a large amount of different data while maintaining patient privacy is a significant challenge in deep learning applications for healthcare. Federated learning, a decentralized learning strategy that allows models to learn from data from many institutions without sharing raw data, has the potential to transform how models like 3D CNNs are trained.
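One common formulation of this idea is FedAvg-style weighted averaging, sketched below for illustration: each institution trains locally, and only model weights, never raw scans, are combined, weighted by local dataset size (the weights and scan counts here are hypothetical):

```python
def federated_average(client_weights, client_sizes):
    """Federated averaging: combine per-client model weights into a global
    model, weighted by each client's local dataset size. Raw data never
    leaves the client; only the weight vectors are shared."""
    total = sum(client_sizes)
    n_params = len(client_weights[0])
    return [sum(w[i] * size for w, size in zip(client_weights, client_sizes)) / total
            for i in range(n_params)]

# Three hospitals with 100, 300, and 600 local scans, each holding a
# two-parameter model after a round of local training:
weights = [[1.0, 2.0], [2.0, 4.0], [3.0, 6.0]]
global_w = federated_average(weights, [100, 300, 600])
```

In a full system this averaging step repeats over many rounds, with the global model redistributed to clients for further local training; the privacy benefit comes from never pooling the underlying images.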
Real-time Application: We may see more real-time uses of 3D CNNs in medical imaging. These models, for example, might be employed in surgical navigation systems or incorporated into radiology workflows, providing radiologists or surgeons with real-time feedback.
Tailored treatment: 3D CNNs have the potential to play a critical role in the realization of tailored treatment. These models could assist in forecasting the disease development or therapy response on an individual level by learning from a patient’s historical imaging data, leading to tailored treatment strategies.
Multimodal Learning: Combining data from different imaging modalities (such as MRI, CT, and PET) might reveal additional information about a patient’s health. Future research may develop 3D CNNs capable of effectively integrating multimodal data for improved diagnosis and prognosis.
Quantum Computing: As quantum computing matures, it may supply the computational power required by 3D CNNs to handle massive volumes of data. Although actual implementation remains a barrier, this advancement could lead to much shorter training times and more sophisticated models [30].
These possible advancements could significantly increase the efficiency and effectiveness of medical imaging, allowing for faster and more accurate diagnosis and providing insights into individualized treatment regimens. However, it is critical to remember that the ethical and privacy concerns associated with data use must also be addressed. Ethical issues with CNNs in medical imaging arise from the requirement for extensive and varied datasets, the restricted interpretability of deep learning models, and the possibility of biases in the training data. At the same time, the performance of CNN-based models depends on access to a large variety of medical imaging data [31,32,33]. CNN-based models have demonstrated considerable gains in image analysis and classification tasks, but issues with medical image segmentation using CNNs, such as a lack of labeled data and a significant class imbalance in the ground truth, must still be resolved [34]. Although accuracy has increased thanks to the use of CNN models for CT image recognition, there remains a risk of overfitting and poor recognition accuracy [35]. The changing nature of healthcare delivery has brought new ethical dilemmas, and healthcare professionals in medical imaging need to be prepared to respond to these challenges [36,37,38,39].

4. Conclusions

Three-dimensional Convolutional Neural Networks (3D CNNs) have emerged as a disruptive force in medical imaging, significantly increasing efficiency and accuracy when processing and analyzing multidimensional and moving images. This article thoroughly examined the applications, advances, challenges, and prospects of 3D CNNs in medical imaging, demonstrating the vast capabilities of this technology. The review emphasized the extraordinary evolution of 3D CNNs, their processes, and their ability to fit image datasets better than older methods. An investigation into the use of 3D CNNs in medical imaging indicated considerable gains in speed and accuracy in identifying diseased tissues, and their role in precision medicine through bladder layer segmentation is noteworthy. Despite these advances, data requirements, processing demands, and model interpretability still need to be addressed. However, proposed solutions such as federated learning, attention mechanisms, and explainable AI methods show promise in tackling these issues and accelerating progress. With advances like real-time applications, personalized medicine, multimodal learning, and the arrival of quantum computing on the horizon, the future of 3D CNNs in medical imaging is bright. These advances can reshape medical imaging and diagnostic paradigms, paving the way for considerable improvements in patient care. Finally, 3D CNNs are a significant step forward in the application of AI in medical imaging. Their ability to analyze complicated images with great speed and accuracy makes them vital in advancing medical imaging and healthcare in general.

Author Contributions

Investigation and literature review, S.T. and G.J.; methodology, S.T.; conceptualization and supervision, D.K.S.; data synthesis, M.S.; theoretical foundation and important information, J.M.B.; writing—review and editing, S.R.B. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data sharing is not applicable to this paper.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Esteva, A.; Robicquet, A.; Ramsundar, B.; Kuleshov, V.; DePristo, M.; Chou, K.; Cui, C.; Corrado, G.; Thrun, S.; Dean, J. A guide to deep learning in Healthcare. Nat. Med. 2019, 25, 24–29. [Google Scholar] [CrossRef] [PubMed]
  2. Lecun, Y.; Bottou, L.; Bengio, Y.; Haffner, P. Gradient-based learning applied to document recognition. Proc. IEEE 1998, 86, 2278–2324. [Google Scholar] [CrossRef]
  3. Ji, S.; Xu, W.; Yang, M.; Yu, K. 3d Convolutional Neural Networks for Human Action Recognition. IEEE Trans. Pattern Anal. Mach. Intell. 2013, 35, 221–231. [Google Scholar] [CrossRef] [PubMed]
  4. Hosny, A.; Parmar, C.; Quackenbush, J.; Schwartz, L.H.; Aerts, H.J. Artificial Intelligence in radiology. Nat. Rev. Cancer 2018, 18, 500–510. [Google Scholar] [CrossRef]
  5. McCulloch, W.S.; Pitts, W. A logical calculus of the ideas immanent in nervous activity. Bull. Math. Biophys. 1943, 5, 115–133. [Google Scholar] [CrossRef]
  6. Zhou, T.; Ruan, S.; Canu, S. A review: Deep Learning for Medical Image segmentation using multi-modality fusion. Array 2019, 6, 100004. [Google Scholar] [CrossRef]
  7. Dolz, J.; Desrosiers, C.; Ben Ayed, I. 3D fully convolutional networks for subcortical segmentation in MRI: A large-scale study. NeuroImage 2018, 170, 456–470. [Google Scholar] [CrossRef]
  8. Setio, A.A.; Ciompi, F.; Litjens, G.; Gerke, P.; Jacobs, C.; van Riel, S.J.; Wille, M.M.; Naqibullah, M.; Sanchez, C.I.; van Ginneken, B. Pulmonary nodule detection in CT images: False positive reduction using multi-view convolutional networks. IEEE Trans. Med. Imaging 2016, 35, 1160–1169. [Google Scholar] [CrossRef]
  9. Rawat, W.; Wang, Z. Deep convolutional neural networks for Image Classification: A Comprehensive Review. Neural Comput. 2017, 29, 2352–2449. [Google Scholar] [CrossRef]
  10. Dosovitskiy, A.; Kolesnikov, A.; Weissenborn, D.; Zhai, X.; Unterthiner, T.; Dehghani, M.; Minderer, M.; Heigold, G.; Gelly, S.; Uszkoreit, J.; et al. An Image is Worth 16 × 16 Words: Transformers for Image Recognition at Scale. arXiv 2020, arXiv:2010.11929. [Google Scholar] [CrossRef]
  11. Zhou, Z.; Rahman Siddiquee, M.M.; Tajbakhsh, N.; Liang, J. UNet++: A nested U-Net Architecture for Medical Image segmentation. In Proceedings of the Workshop of Deep Learning in Medical Image Analysis and Multimodal Learning for Clinical Decision Support, Granada, Spain, 20 September 2018; pp. 3–11. [Google Scholar] [CrossRef]
  12. Baumgärtner, G.L.; Hamm, C.A.; Schulze-Weddige, S.; Ruppel, R.; Beetz, N.L.; Rudolph, M.; Dräger, F.; Froböse, K.P.; Posch, H.; Lenk, J.; et al. Metadata-independent classification of MRI sequences using convolutional neural networks: Successful application to Prostate MRI. Eur. J. Radiol. 2023, 166, 110964. [Google Scholar] [CrossRef]
  13. Tufail, A.B.; Anwar, N.; Othman, M.T.; Ullah, I.; Khan, R.A.; Ma, Y.-K.; Adhikari, D.; Rehman, A.U.; Shafiq, M.; Hamam, H. Early-stage alzheimer’s disease categorization using pet neuroimaging modality and convolutional neural networks in the 2D and 3D domains. Sensors 2022, 22, 4609. [Google Scholar] [CrossRef] [PubMed]
  14. Zhang, J.; Zheng, B.; Gao, A.; Feng, X.; Liang, D.; Long, X. A 3D densely connected convolution neural network with connection-wise attention mechanism for alzheimer’s disease classification. Magn. Reson. Imaging 2021, 78, 119–126. [Google Scholar] [CrossRef] [PubMed]
  15. Hogan, R.; Christoforou, C. Alzheimer’s detection through 3D Convolutional Neural Networks. In Proceedings of the FLAIRS-34, Clearwater Beach, FL, USA, 14–17 May 2023. [Google Scholar] [CrossRef]
  16. Ebrahimi, A.; Luo, S. Disease Neuroimaging Initiative, for the Convolutional neural networks for alzheimer’s disease detection on MRI images. J. Med. Imaging 2021, 8, 024503. [Google Scholar] [CrossRef] [PubMed]
  17. Dehkordi, A.A.; Hashemi, M.; Neshat, M.; Mirjalili, S.; Sadiq, A.S. Brain tumor detection and classification using a new evolutionary convolutional neural network. arXiv 2022, arXiv:2204.12297. [Google Scholar] [CrossRef]
  18. Pradhan, A.; Sarma, B.; Dey, B.K. Lung cancer detection using 3d Convolutional Neural Networks. In Proceedings of the 2020 International Conference on Computational Performance Evaluation (ComPE), Shillong, India, 2–4 July 2020. [Google Scholar] [CrossRef]
  19. Lima, T.J.; Ushizima, D.; de Carvalho Filho, A.O.; de Araujo, F.H. Lung CT screening with 3D convolutional neural network architectures. In Proceedings of the 2020 IEEE 17th International Symposium on Biomedical Imaging Workshops (ISBI Workshops), Iowa City, IA, USA, 4 April 2020. [Google Scholar] [CrossRef]
  20. Ahmed, T.; Parvin, M.S.; Haque, M.R.; Uddin, M.S. Lung cancer detection using CT image based on 3D Convolutional Neural Network. J. Comput. Commun. 2020, 8, 35–42. [Google Scholar] [CrossRef]
  21. Mengash, H.A.; Mahmoud, H.A.H. Brain cancer tumor classification from motion-corrected MRI images using convolutional neural network. Comput. Mater. Contin. 2021, 68, 1551–1563. [Google Scholar] [CrossRef]
  22. Zhou, Y.; Chen, H.; Li, Y.; Wang, S.; Cheng, L.; Li, J. 3D multi-view tumor detection in automated whole breast ultrasound using deep convolutional neural network. Expert Syst. Appl. 2021, 168, 114410. [Google Scholar] [CrossRef]
  23. Majidpourkhoei, R.; Alilou, M.; Majidzadeh, K.; Babazadehsangar, A. A novel deep learning framework for lung nodule detection in 3D CT images. Multimed. Tools Appl. 2021, 80, 30539–30555. [Google Scholar] [CrossRef]
  24. Meng, H.; Li, Q.; Liu, X.; Wang, Y.; Niu, J. Multi-scale view-based convolutional neural network for breast cancer classification in ultrasound images. In Proceedings of the SPIE Medical Imaging 2021: Computer-Aided Diagnosis, Online, 15–20 February 2021. [Google Scholar] [CrossRef]
  25. Dwivedi, S.; Goel, T.; Sharma, R.; Murugan, R. Structural MRI based alzheimer’s disease prognosis using 3D convolutional neural network and support vector machine. In Proceedings of the 2021 Advanced Communication Technologies and Signal Processing (ACTS), Rourkela, India, 15–17 December 2021. [Google Scholar] [CrossRef]
  26. Yang, H.; Wang, X.; Tan, J.; Liu, G.; Sun, X.; Li, Y. A breast ultrasound tumor detection framework using Convolutional Neural Networks. In Proceedings of the 2022 2nd International Conference on Bioinformatics and Intelligent Computing, Hyderabad, India, 27–28 December 2022. [Google Scholar] [CrossRef]
  27. Shorten, C.; Khoshgoftaar, T.M. A survey on image data augmentation for Deep Learning. J. Big Data 2019, 6, 60. [Google Scholar] [CrossRef]
  28. Rudin, C. Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead. Nat. Mach. Intell. 2019, 1, 206–215. [Google Scholar] [CrossRef] [PubMed]
  29. Ancona, M.; Ceolini, E.; Öztireli, C.; Gross, M. Towards better understanding of gradient-based attribution methods for Deep Neural Networks. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), Waikola, HI, USA, 27–29 March 2017; pp. 287–296. [Google Scholar] [CrossRef]
  30. Cong, I.; Choi, S.; Lukin, M.D. Quantum Convolutional Neural Networks. Nat. Phys. 2019, 18, 264–270. [Google Scholar] [CrossRef]
  31. Rajpurkar, P.; Irvin, J.; Ball, R.L.; Zhu, K.; Yang, B.; Mehta, H.; Duan, T.; Ding, D.; Bagul, A.; Langlotz, C.P.; et al. Deep learning for chest radiograph diagnosis: A retrospective comparison of the CheXNeXt algorithm to practicing radiologists. PLOS Med. 2018, 18, e1003673. [Google Scholar] [CrossRef] [PubMed]
  32. Tan, M.; Le, Q. EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks. In Proceedings of the 36th International Conference on Machine Learning, Beach, CA, USA, 10–15 June 2019; Volume 97, pp. 6105–6114. [Google Scholar]
  33. Salehi, A.W.; Khan, S.; Gupta, G.; Alabduallah, B.I.; Almjally, A.; Alsolai, H.; Siddiqui, T.; Mellit, A. A study of CNN and transfer learning in medical imaging: Advantages, challenges, future scope. Sustainability 2023, 15, 5930. [Google Scholar] [CrossRef]
  34. Medical Image Segmentation Based on Mixed Context CNN Model (2021): He Xuejian. SciSpace—Paper. 18 February 2021. Available online: https://typeset.io/papers/medical-image-segmentation-based-on-mixed-context-cnn-model-43sev48ks4 (accessed on 31 August 2021).
  35. Kayalibay, B.; Jensen, G.; van der Smagt, P. CNN-based segmentation of Medical Imaging Data. arXiv 2017, arXiv:1701.03056. [Google Scholar]
  36. Pettigrew, A. Ethical issues in medical imaging: Implications for the curricula. Radiography 2000, 6, 293–298. [Google Scholar] [CrossRef]
  37. Hameed, B.M.; Dhavileswarapu, A.V.S.; Raza, S.Z.; Karimi, H.; Khanuja, H.S.; Shetty, D.K.; Ibrahim, S.; Shah, M.J.; Naik, N.; Paul, R.; et al. Artificial Intelligence and its impact on urological diseases and management: A Comprehensive Review of the literature. J. Clin. Med. 2021, 10, 1864. [Google Scholar] [CrossRef]
  38. Subrahmanya, S.V.; Shetty, D.K.; Patil, V.; Hameed, B.M.; Paul, R.; Smriti, K.; Naik, N.; Somani, B.K. The role of Data Science in healthcare advancements: Applications, benefits, and future prospects. Ir. J. Med. Sci. (1971-) 2021, 191, 1473–1483. [Google Scholar] [CrossRef]
  39. Hameed, B.Z.; Naik, N.; Ibrahim, S.; Tatkar, N.S.; Shah, M.J.; Prasad, D.; Hegde, P.; Chlosta, P.; Rai, B.P.; Somani, B.K. Breaking barriers: Unveiling factors influencing the adoption of artificial intelligence by healthcare providers. Big Data Cogn. Comput. 2023, 7, 105. [Google Scholar] [CrossRef]
Figure 1. Generic 3D CNN process flow.
Table 1. Highlights from the literature reviewed on CNN and different medical image processing.
Ref. | Author(s) | Imaging Modality | Medical Condition Studied | Results/Performance
[12] | Baumgärtner et al., 2023 | MRI | Prostate cancer | Accuracy: 97.43%
[13] | Tufail et al., 2022 | PET | Alzheimer's disease | Accuracy: 89.21%
[14] | Zhang et al., 2021 | MRI | Alzheimer's disease | Accuracy: 97.35%
[15] | Hogan et al., 2021 | MRI | Alzheimer's disease | Accuracy: 79%
[16] | Ebrahimi et al., 2021 | MRI | Alzheimer's disease | Specificity: 94.12%; Accuracy: 96.88%; Sensitivity: 100%
[17] | Dehkordi et al., 2022 | MRI | Brain tumor | Specificity: 98.6%; Accuracy: 97.4%; Sensitivity: 96%; F1-score: 96.6%
[18] | Pradhan et al., 2020 | CT scan | Lung nodules | Accuracy: 83.33%
[19] | Lima et al., 2020 | CT scan | Lung nodules | Sensitivity: 86.30%; Specificity: 91.91%
[20] | Ahmed et al., 2020 | CT scan | Lung cancer | Accuracy: 80%
[21] | Mengash and Mahmoud, 2021 | MRI | Brain tumors | Accuracy: 97.8%; Specificity: 99.2%; Sensitivity: 97.32%
[22] | Zhou et al., 2021 | Ultrasound | Breast cancer | Sensitivity: 95.06%
[23] | Majidpourkhoei et al., 2021 | CT scan | Lung nodules | Accuracy: 90.1%; Specificity: 91.7%; Sensitivity: 84.1%
[24] | Meng et al., 2021 | Ultrasound | Breast cancer | Accuracy: 90.7%
[25] | Dwivedi et al., 2021 | MRI | Alzheimer's disease | Specificity: 90.0%; Accuracy: 91.85%; Sensitivity: 95.56%; F1-score: 88.66%
[26] | Yang et al., 2021 | Ultrasound | Breast cancer | Sensitivity: 97.62%
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
