Review

Deep Learning and Medical Diagnosis: A Review of Literature

Mihalj Bakator and Dragica Radosav
Technical Faculty “Mihajlo Pupin” in Zrenjanin, University of Novi Sad, Djure Djakovica bb, 23000 Zrenjanin, Serbia
* Author to whom correspondence should be addressed.
Multimodal Technol. Interact. 2018, 2(3), 47; https://doi.org/10.3390/mti2030047
Submission received: 20 June 2018 / Revised: 10 August 2018 / Accepted: 14 August 2018 / Published: 17 August 2018
(This article belongs to the Special Issue Deep Learning)

Abstract

In this review, the application of deep learning to medical diagnosis is addressed. A thorough analysis of scientific articles on the application of deep neural networks in the medical field was conducted. More than 300 research articles were obtained, and after several selection steps, 46 articles were reviewed in more detail. The results indicate that convolutional neural networks (CNNs) are the most widely represented method when it comes to deep learning and medical image analysis. Furthermore, based on the findings of this review, the application of deep learning technology is widespread, but the majority of applications are focused on bioinformatics, medical diagnosis, and other similar fields.

1. Introduction

Neural networks have advanced at a remarkable rate and have found practical applications in various industries [1]. Deep neural networks map inputs to outputs through a complex composition of layers, the building blocks of which are linear transformations and nonlinear functions [2]. Deep learning can now solve problems that are hardly solvable with traditional artificial intelligence [3]. Deep learning can also exploit unlabeled information during training; it is thus well suited to learning and acquiring knowledge from heterogeneous information and data [4]. Applications of deep learning may enable malicious actions, but the positive uses of this technology are far broader. Back in 2015, it was noted that deep learning has a clear path towards operating with large data sets, and the applications of deep learning were therefore expected to broaden further [3]. A large number of newer studies have highlighted the capabilities of advanced deep learning technologies, including learning from complex data [5,6], image recognition [7], text categorization [8] and others. One of the main applications of deep learning is medical diagnosis [9,10]. This includes, but is not limited to, health informatics [11], biomedicine [12], and magnetic resonance imaging (MRI) analysis [13]. More specific uses of deep learning in the medical field include segmentation, diagnosis, classification, prediction, and detection in various anatomical regions of interest (ROI). Compared to traditional machine learning, deep learning has a clear advantage: it can learn from raw data through multiple hidden layers that build increasingly abstract representations of the inputs [5]. The key to these capabilities lies in the ability of neural networks to learn from data through a general-purpose learning procedure [5].
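The layered composition described above can be made concrete with a minimal sketch. The example below is a hypothetical illustration rather than code from any of the reviewed studies; it uses PyTorch to stack convolutions, nonlinear activations and pooling into a small image classifier of the kind applied to medical images, with arbitrary assumptions for input size, channel counts, and the number of diagnostic classes.

```python
# Minimal sketch (assumed parameters): a small CNN that composes
# convolution -> nonlinearity -> pooling blocks into a classifier,
# illustrating the "stacked transformations" idea described above.
import torch
import torch.nn as nn

class TinyMedicalCNN(nn.Module):
    def __init__(self, in_channels: int = 1, num_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 16, kernel_size=3, padding=1),  # learned linear transformation
            nn.ReLU(),                                             # nonlinear function
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 16 * 16, num_classes)     # maps features to diagnostic classes

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x)
        return self.classifier(torch.flatten(x, start_dim=1))

# Example: a batch of four 64x64 single-channel scans mapped to two classes.
logits = TinyMedicalCNN()(torch.randn(4, 1, 64, 64))
print(logits.shape)  # torch.Size([4, 2])
```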
The main goal of this review is to address the applications of deep learning in medical diagnosis in a concise and simple manner. Why is this important? A large number of scientific papers describe specific applications of deep learning in great detail, but papers that provide a concise review of deep learning applications in medical diagnosis are scarce. Scientific terminology in the domain of deep learning can be confusing for researchers outside of this field. This review paper therefore takes a concise and simple approach to deep learning applications in medical diagnosis, and it can thereby moderately contribute to the existing body of literature. The following research questions are used as guidelines for this article:
  • How diverse is the application of deep learning in the field of medical diagnosis?
  • Can deep learning substitute the role of doctors in the future?
  • Does deep learning have a future or will it become obsolete?
This paper includes three main sections. First, the research methodology is described. Next, the review of deep learning applications in medical diagnosis is presented. Finally, the results are discussed, conclusions are drawn, and future research is suggested.

2. Method

2.1. Flow Diagram of the Research

The research process follows the PRISMA flow diagram and protocol [14], which depicts the steps taken from identifying articles to selecting those eligible for further analysis. The flow diagram is shown in Figure 1.
There are four main stages in the flow diagram. First, articles were identified, which included acquiring articles from various sources. The second stage covers the screening process: duplicate articles were excluded, the remaining articles were screened once more, and inadequate articles were removed. In the third stage, full-text articles were assessed to determine their eligibility for further review, and ineligible articles were excluded. The fourth and final stage comprises the studies that were thoroughly analyzed.
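The four-stage flow can be expressed as a simple filtering pipeline. The sketch below is only an illustration of the logic described above, with made-up record fields, a deduplication key, and placeholder eligibility rules; it is not the actual tooling used for this review.

```python
# Illustrative sketch of the four-stage flow described above
# (identification -> screening -> eligibility -> inclusion).
# Record fields and rules are hypothetical placeholders.

def prisma_flow(records, is_relevant, is_eligible):
    identified = list(records)

    # Screening: remove duplicates (keyed on DOI here, as an assumption)
    # and clearly inadequate records based on title/abstract.
    seen, deduplicated = set(), []
    for rec in identified:
        if rec["doi"] not in seen:
            seen.add(rec["doi"])
            deduplicated.append(rec)
    screened = [rec for rec in deduplicated if is_relevant(rec)]

    # Eligibility: full-text assessment of the remaining articles.
    included = [rec for rec in screened if is_eligible(rec)]

    return {
        "identified": len(identified),
        "after_deduplication": len(deduplicated),
        "screened": len(screened),
        "included": len(included),
    }

# Usage example with toy records and placeholder criteria.
records = [
    {"doi": "10.1/a", "title": "Deep learning for MRI segmentation", "full_text_ok": True},
    {"doi": "10.1/a", "title": "Deep learning for MRI segmentation", "full_text_ok": True},  # duplicate
    {"doi": "10.1/b", "title": "Greenhouse climate control", "full_text_ok": True},
]
counts = prisma_flow(
    records,
    is_relevant=lambda r: "deep learning" in r["title"].lower(),
    is_eligible=lambda r: r["full_text_ok"],
)
print(counts)  # {'identified': 3, 'after_deduplication': 2, 'screened': 1, 'included': 1}
```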

2.2. Literature Sources

In order to investigate the applications of deep learning in medical diagnosis, 263 articles published in the domain were analyzed. The main sources of these articles are presented in Table 1.
These journals were chosen to preserve the credibility of this review. However, a wide variety of other literature sources would also have been adequate.

2.3. Data Collection Process

The data collection process included an extensive search for articles addressing the applications of deep learning in the medical field. These articles were downloaded and analyzed in order to acquire sufficient theoretical information on the subject. The results in this paper are qualitative in nature, and the main focus is to review the applications of deep learning and to answer the research questions outlined in the introduction. In sum, the data collection process was conducted in four main phases:
  • Phase 1: Searching for articles in credible journals, using the keywords presented in Section 2.4 of this paper. At this point, the articles were not yet thoroughly analyzed.
  • Phase 2: Analyzing the literature and excluding articles that did not fit the eligibility criteria. As there was no special screening during the search process, at this point the articles were assessed and selected for further analysis.
  • Phase 3: Conducting a thorough analysis of the eligible articles and classifying the qualitative data in accordance with the aim of the review. At this stage there was a possibility of bias towards clearly written and clearly conducted research articles.
  • Phase 4: Obtaining the qualitative data and taking notes in order to present the data concisely in the results section of this paper. Data was collected in the form of remarks and notes on what types of data and methods were used, and for what applications.

2.4. Obtained Literature and Eligibility Criteria

When gathering the necessary literature for this systematic review, it was important to cover the various fields in which deep learning is practically used. Therefore, the following keywords were used in the search engine:
  • deep learning practical applications
  • deep learning and medical diagnosis
  • deep learning and MRI
  • deep learning CT
  • deep learning segmentation in medicine
  • deep learning classification in medicine
  • deep learning diagnosis medicine
  • deep learning application medicine
This way, it was ensured that a wide variety of articles would be included in the review. The year of publication was also considered; the earliest reviewed article dates from 2014, while the majority of the other reviewed articles are from 2016, 2017 and 2018. However, for the introduction section of this review, earlier articles were also addressed.
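As a rough illustration only (not the actual search tooling used), the sketch below shows how the keyword list and the publication-year criterion described above could be applied as an automated filter over candidate records; the field names and the matching rule are assumptions.

```python
# Hypothetical sketch: applying the keyword list and the publication-year
# criterion described above to candidate records. Field names are assumed.
SEARCH_KEYWORDS = [
    "deep learning practical applications",
    "deep learning and medical diagnosis",
    "deep learning and MRI",
    "deep learning CT",
    "deep learning segmentation in medicine",
    "deep learning classification in medicine",
    "deep learning diagnosis medicine",
    "deep learning application medicine",
]

def matches_keywords(text: str) -> bool:
    text = text.lower()
    # A record qualifies if every word of at least one keyword phrase appears in it.
    return any(all(word in text for word in phrase.lower().split()) for phrase in SEARCH_KEYWORDS)

def within_review_window(year: int, earliest: int = 2014) -> bool:
    return year >= earliest

candidate = {"title": "Deep learning based segmentation in medicine", "year": 2017}
print(matches_keywords(candidate["title"]) and within_review_window(candidate["year"]))  # True
```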

2.5. Risk of Bias in Individual Studies

There was no major bias during the data analysis. However, if an article did not address the application of deep learning in the field of medical diagnosis, or in medicine in general, it was excluded from further analysis. This type of review allows the inclusion of articles regardless of sample size, location, and data. There may appear to be a minor bias towards articles that address deep learning applications in cancer detection in particular. However, this is because the number of articles is much higher in this specific domain than for other diseases. Therefore, this minor bias does not have a major impact on the obtained results.

3. Results

When it comes to deep learning and its application to medical diagnosis, there are two main approaches. The first approach is classification, which narrows down the potential outcomes (diagnoses) by mapping data to specific outcomes. The second approach uses physiological data, including medical images and data from other sources, to identify and diagnose tumors or other diseases [15]. In addition, deep learning can be used to support dietary assessment [16]. Certainly, deep learning is applied in a variety of ways when it comes to medical diagnosis.
Brief reviews of individual articles in the domain of deep learning and medical diagnosis are given in Table 2.
Furthermore, the synthesis of the results is presented in Table 3.
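Many of the studies summarized in Table 2 report their results as accuracy, sensitivity, specificity, or Dice similarity coefficients. For readers outside the field, the short sketch below shows how these standard metrics are computed from binary predictions; it is a generic illustration, not code from any of the reviewed papers.

```python
# Generic illustration of the evaluation metrics reported by the reviewed
# studies (accuracy, sensitivity, specificity, Dice similarity coefficient).

def confusion_counts(y_true, y_pred):
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    return tp, tn, fp, fn

def diagnostic_metrics(y_true, y_pred):
    tp, tn, fp, fn = confusion_counts(y_true, y_pred)
    return {
        "accuracy": (tp + tn) / (tp + tn + fp + fn),
        "sensitivity": tp / (tp + fn) if (tp + fn) else 0.0,  # recall on diseased cases
        "specificity": tn / (tn + fp) if (tn + fp) else 0.0,  # recall on healthy cases
        "dice": 2 * tp / (2 * tp + fp + fn) if (2 * tp + fp + fn) else 0.0,  # overlap, used for segmentation
    }

# Toy example: 1 = disease present / pixel inside the lesion, 0 = absent.
y_true = [1, 1, 1, 0, 0, 0, 0, 1]
y_pred = [1, 1, 0, 0, 0, 1, 0, 1]
print(diagnostic_metrics(y_true, y_pred))  # accuracy, sensitivity, specificity, dice all 0.75 here
```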
In the next section the results are discussed.

4. Discussion

Discussing the Results

The main goal of this paper was to review various articles in the domain of deep learning applications in medical diagnosis. After analyzing more than 300 articles, 46 were examined further, and the individual results of each article were presented. There was no need for quantitative data analysis, as the purpose of this review was to present the variety of deep learning uses in the medical field. The synthesis of the data was conducted in a simple way, and some of the methods used for the synthesis were in accordance with other similar studies [63,64,65]. According to the gathered data, the most widely used deep learning method is the convolutional neural network (CNN). In addition, MRI was the most frequently used type of training data, and segmentation is the most represented application. It is important to note that the article review and analysis was biased towards newer articles (published in 2015 and later) and articles that included “deep learning” in the title. There is a large variety in the types of data used to train and apply deep neural networks: CT scans, MRIs, fundus photographs and other types of data can be used for expert-level diagnosis. However, as noted in other studies, artificial neural networks spend energy activating neurons, whereas in the human brain only a small number of neurons are active during a thought process, while neighboring neurons remain inactive until needed; communication “costs” are reduced through single-task allocation for neighboring neurons [65]. It is expected that artificial neural networks will develop further in the future and thus manage to complete more complex tasks.
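Since segmentation, most often on MRI data, emerged as the most represented application among the reviewed studies, a minimal encoder–decoder sketch is given below to illustrate the general idea behind such models. It is a heavily simplified, hypothetical example, not any of the architectures used in the cited works; shapes and layer sizes are arbitrary assumptions.

```python
# Minimal, hypothetical encoder-decoder sketch illustrating image
# segmentation, the most represented application in the reviewed studies.
import torch
import torch.nn as nn

class TinySegNet(nn.Module):
    def __init__(self, in_channels: int = 1, num_classes: int = 2):
        super().__init__()
        # Encoder: downsample while extracting features.
        self.encoder = nn.Sequential(
            nn.Conv2d(in_channels, 16, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
        )
        # Decoder: upsample back to the input resolution and predict a
        # class for every pixel (e.g., lesion vs. background).
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 2, stride=2), nn.ReLU(),
            nn.Conv2d(16, num_classes, 1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.decoder(self.encoder(x))

# Example: one 64x64 single-channel slice -> per-pixel class scores.
scores = TinySegNet()(torch.randn(1, 1, 64, 64))
print(scores.shape)  # torch.Size([1, 2, 64, 64])
```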
The concise nature of this review can moderately contribute to the existing body of literature. The aim was to provide an objective, simple, and concise article. The individual research results provide sufficient information and insight into the applications of deep learning for detecting, classifying, segmenting and diagnosing various diseases and abnormalities in specific anatomical regions of interest (ROI). Without a doubt, deep learning applications in the medical field will develop further, as deep learning has already achieved remarkable results in medical image analysis [66] and, more precisely, in image-based cancer detection and diagnosis [67]. This may increase the efficiency and quality of healthcare in the long run, thus reducing the risk of late diagnosis of serious diseases. However, as mentioned before, there is still a long way to go before general-purpose neural networks become commercially relevant. Finally, it is expected that artificial intelligence will “rise” through the combination of representation learning and complex reasoning [3].

5. Conclusions

5.1. Research Questions

In the introduction section of this review, three main research questions were investigated:
● How diverse is the application of deep learning in the field of medical diagnosis?
Deep learning methods have a wide range of applications in the medical field, where medical diagnosis is supported through use cases such as detection, segmentation, classification and prediction, as mentioned before. The results of the reviewed studies indicate that deep learning methods can be far superior to other high-performing algorithms. Therefore, it is safe to assume that deep learning will continue to diversify its uses.
● Can deep learning substitute the role of doctors in the future?
The future development of deep learning promises more applications in various fields of medicine, particularly in the domain of medical diagnosis. However, in its current state, it is not evident that deep learning can substitute for the role of doctors/clinicians in medical diagnosis. So far, deep learning can provide good support for experts in the medical field.
● Does deep learning have a future or will it become obsolete?
All indicators point towards an even wider use of deep learning in various fields. Deep learning has already found applications in transportation and greenhouse-gas emission control [68], traffic control [69], text classification [8,70], object detection [71], hate speech detection and speech recognition [72,73], translation [74] and other fields. These applications were far less represented in the past. Traditional approaches to various similarity measures are ineffective compared to deep learning [63]. Based on these findings, it can be suggested that deep learning and deep neural networks will prevail and find many other uses in the near future.

5.2. Limitations and Future Research

The main limitation of this paper is the absence of a meta-analysis of quantitative data. However, considering the main goal of this paper, this limitation does not devalue the contribution of the review. For future research, a more categorized review should be conducted. In addition, the development and application of deep learning could be tracked over defined periods of time. A theoretical introduction is also recommended for future reviews. In this case, the theoretical background did not contain a detailed explanation of how deep neural networks function; however, given the nature of the review and the target audience (researchers whose domain of expertise is not deep learning), such a theoretical approach was not deemed necessary.

Author Contributions

M.B. conducted the investigation, data curation, and writing of the original draft. D.R. contributed in the form of supervision, conceptualization, and methodology.

Funding

This research received no external funding.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Szegedy, C.; Wei, L.; Yang, J.; Sermanet, P.; Reed, S.; Anguelov, D.; Erhan, D.; Vanhoucke, V.; Rabinovich, A. Going deeper with convolutions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Boston, MA, USA, 7–12 June 2015; pp. 1–9. [Google Scholar]
  2. Abadi, M.; Chu, A.; Goodfellow, I.; McMahan, H.B.; Mironov, I.; Talwar, K.; Zhang, L. Deep Learning with Differential Privacy. In Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security, Vienna, Austria, 24–28 October 2016; pp. 308–318. [Google Scholar]
  3. LeCun, Y.; Bengio, Y.; Hinton, G. Deep learning. Nature 2015, 521, 436–444. [Google Scholar] [CrossRef] [PubMed]
  4. Chen, X.-W.; Lin, X. Big data deep learning: Challenges and perspectives. IEEE Access 2014, 2, 514–525. [Google Scholar] [CrossRef]
  5. Miotto, R.; Wang, F.; Wang, S.; Jiang, X.; Dudley, J.T. Deep learning for healthcare: Review, opportunities and challenges. Brief Bioinform. 2017. [Google Scholar] [CrossRef] [PubMed]
  6. Wei, J.; He, J.; Chen, K.; Zhou, Y.; Tang, Z. Collaborative filtering and deep learning based recommendation system for cold start items. Expert Syst. Appl. 2017, 69, 29–39. [Google Scholar] [CrossRef]
  7. Shin, H.C.; Roth, H.R.; Gao, M.; Lu, L.; Xu, Z.; Nogues, I.; Summers, R.M. Deep Convolutional Neural Networks for Computer-Aided Detection: CNN Architectures, Dataset Characteristics and Transfer Learning. IEEE Trans. Med. Imaging 2016, 35, 1285–1298. [Google Scholar] [CrossRef] [PubMed]
  8. Song, J.; Qin, S.; Zhang, P. Chinese text categorization based on deep belief networks. In Proceedings of the 2016 IEEE/ACIS 15th International Conference on Computer and Information Science (ICIS), Okayama, Japan, 26–29 June 2016; pp. 1–5. [Google Scholar]
  9. Lee, J.G.; Jun, S.; Cho, Y.W.; Lee, H.; Kim, G.B.; Seo, J.B.; Kim, N. Deep Learning in Medical Imaging: General Overview. Korean J. Radiol. 2017, 18, 570–584. [Google Scholar] [CrossRef] [PubMed]
  10. Suzuki, K. Overview of deep learning in medical imaging. Radiol. Phys. Technol. 2017, 10, 257–273. [Google Scholar] [CrossRef] [PubMed]
  11. Ravì, D.; Wong, C.; Deligianni, F.; Berthelot, M.; Andreu-Perez, J.; Lo, B.; Yang, G.-Z. Deep learning for health informatics. IEEE J. Biomed. Health Inf. 2017, 21, 4–21. [Google Scholar] [CrossRef] [PubMed]
  12. Mamoshina, P.; Vieira, A.; Putin, E.; Zhavoronkov, A. Applications of Deep Learning in Biomedicine. Mol. Pharm. 2016, 13, 1445–1454. [Google Scholar] [CrossRef] [PubMed]
  13. Liu, J.; Pan, Y.; Li, M.; Chen, Z.; Tang, L.; Lu, C.; Wang, J. Applications of deep learning to MRI images: A survey. Big Data Mining Anal. 2018, 1, 1–18. [Google Scholar] [CrossRef]
  14. Moher, D.; Liberati, A.; Tetzlaff, J.; Altman, D.G.; Prisma Group. Preferred reporting items for systematic reviews and meta-analyses: The PRISMA statement. Int. J. Surg. 2010, 8, 336–341. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  15. Greenspan, H.; Van Ginneken, B.; Summers, R.M. Guest editorial deep learning in medical imaging: Overview and future promise of an exciting new technique. IEEE Trans. Med. Imaging 2016, 35, 1153–1159. [Google Scholar] [CrossRef]
  16. Mezgec, S.; Koroušić Seljak, B. NutriNet: A Deep Learning Food and Drink Image Recognition System for Dietary Assessment. Nutrients 2017, 9, 657. [Google Scholar] [CrossRef] [PubMed]
  17. De Vos, B.D.; Wolterink, J.M.; de Jong, P.A.; Viergever, M.A.; Išgum, I. 2D image classification for 3D anatomy localization: Employing deep convolutional neural networks. In Proceedings of the Medical Imaging 2016: Image Processing, San Diego, CA, USA, 1–3 March 2016; Volume 9784. [Google Scholar]
  18. Dou, Q.; Yu, L.; Chen, H.; Jin, Y.; Yang, X.; Qin, J.; Heng, P.A. 3D deeply supervised network for automated segmentation of volumetric medical images. Med Image Anal. 2017, 41, 40–54. [Google Scholar] [CrossRef] [PubMed]
  19. Pan, Y.; Huang, W.; Lin, Z.; Zhu, W.; Zhou, J.; Wong, J.; Ding, Z. Brain tumor grading based on neural networks and convolutional neural networks. In Proceedings of the 2015 37th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), Milan, Italy, 25–29 August 2015; pp. 699–702. [Google Scholar]
  20. Chen, X.; Xu, Y.; Wong, D.W.K.; Wong, T.Y.; Liu, J. Glaucoma detection based on deep convolutional neural network. In Proceedings of the 2015 37th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), Milan, Italy, 25–29 August 2015; pp. 715–718. [Google Scholar]
  21. Payan, A.; Montana, G. Predicting Alzheimer’s disease: A neuroimaging study with 3D convolutional neural networks. arXiv 2015, arXiv:1502.02506. [Google Scholar]
  22. Dubrovina, A.; Kisilev, P.; Ginsburg, B.; Hashoul, S.; Kimmel, R. Computational mammography using deep neural networks. Comput. Methods Biomech. Biomed. Eng. Imaging Vis. 2016, 6, 243–247. [Google Scholar] [CrossRef]
  23. Acharya, U.R.; Fujita, H.; Oh, S.L.; Hagiwara, Y.; Tan, J.H.; Adam, M. Application of deep convolutional neural network for automated detection of myocardial infarction using ECG signals. Inf. Sci. 2017, 415, 190–198. [Google Scholar] [CrossRef]
  24. Mehta, R.; Majumdar, A.; Sivaswamy, J. BrainSegNet: A convolutional neural network architecture for automated segmentation of human brain structures. J. Med. Imaging (Bellingham) 2017, 4, 024003. [Google Scholar] [CrossRef] [PubMed]
  25. Abdel-Zaher, A.M.; Eldeib, A.M. Breast cancer classification using deep belief networks. Expert Syst. Appl. 2016, 46, 139–144. [Google Scholar] [CrossRef]
  26. Cheng, J.Z.; Ni, D.; Chou, Y.H.; Qin, J.; Tiu, C.M.; Chang, Y.C.; Huang, C.S.; Shen, D.; Chen, C.M. Computer-Aided Diagnosis with Deep Learning Architecture: Applications to Breast Lesions in US Images and Pulmonary Nodules in CT Scans. Sci. Rep. 2016, 6, 24454. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  27. Esteva, A.; Kuprel, B.; Novoa, R.A.; Ko, J.; Swetter, S.M.; Blau, H.M.; Thrun, S. Dermatologist-level classification of skin cancer with deep neural networks. Nature 2017, 542, 115–118. [Google Scholar] [CrossRef] [PubMed]
  28. Xu, Y.; Dai, Z.; Chen, F.; Gao, S.; Pei, J.; Lai, L. Deep Learning for Drug-Induced Liver Injury. J. Chem. Inf. Model. 2015, 55, 2085–2093. [Google Scholar] [CrossRef] [PubMed]
  29. Gao, X.; Lin, S.; Wong, T.Y. Automatic Feature Learning to Grade Nuclear Cataracts Based on Deep Learning. IEEE Trans. Biomed. Eng. 2015, 62, 2693–2701. [Google Scholar] [CrossRef] [PubMed]
  30. Bar, Y.; Diamant, I.; Wolf, L.; Greenspan, H. Deep learning with non-medical training used for chest pathology identification. In Proceedings of the Medical Imaging 2015: Computer-Aided Diagnosis, Orlando, FL, USA, 3–7 November 2015; Volume 9414. [Google Scholar]
  31. Masood, A.; Al-Jumaily, A.; Anam, K. Self-supervised learning model for skin cancer diagnosis. In Proceedings of the 2015 7th International IEEE/EMBS Conference Neural Engineering (NER), Montpellier, France, 22–24 April 2015; pp. 1012–1015. [Google Scholar]
  32. Han, Z.; Wei, B.; Zheng, Y.; Yin, Y.; Li, K.; Li, S. Breast Cancer Multi-classification from Histopathological Images with Structured Deep Learning Model. Sci. Rep. 2017, 7, 4172. [Google Scholar] [CrossRef] [PubMed]
  33. Havaei, M.; Davy, A.; Warde-Farley, D.; Biard, A.; Courville, A.; Bengio, Y.; Larochelle, H. Brain tumor segmentation with Deep Neural Networks. Med. Image Anal. 2017, 35, 18–31. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  34. Kamnitsas, K.; Ledig, C.; Newcombe, V.F.J.; Simpson, J.P.; Kane, A.D.; Menon, D.K.; Glocker, B. Efficient multi-scale 3D CNN with fully connected CRF for accurate brain lesion segmentation. Med. Image Anal. 2017, 36, 61–78. [Google Scholar] [CrossRef] [PubMed]
  35. Mohamed, A.A.; Berg, W.A.; Peng, H.; Luo, Y.; Jankowitz, R.C.; Wu, S. A deep learning method for classifying mammographic breast density categories. Med. Phys. 2018, 45, 314–321. [Google Scholar] [CrossRef] [PubMed]
  36. Zhang, Q.; Xiao, Y.; Dai, W.; Suo, J.; Wang, C.; Shi, J.; Zheng, H. Deep learning based classification of breast tumors with shear-wave elastography. Ultrasonics 2016, 72, 150–157. [Google Scholar] [CrossRef] [PubMed]
  37. Cha, K.H.; Hadjiiski, L.; Samala, R.K.; Chan, H.P.; Caoili, E.M.; Cohan, R.H. Urinary bladder segmentation in CT urography using deep-learning convolutional neural network and level sets. Med. Phys. 2016, 43, 1882. [Google Scholar] [CrossRef] [PubMed]
  38. Isensee, F.; Kickingereder, P.; Bonekamp, D.; Bendszus, M.; Wick, W.; Schlemmer, H.P.; Maier-Hein, K. Brain Tumor Segmentation Using Large Receptive Field Deep Convolutional Neural Networks. Bildverarb. Med. 2017, 86–91. [Google Scholar] [CrossRef]
  39. González, G.; Ash, S.Y.; Vegas-Sánchez-Ferrero, G.; Onieva Onieva, J.; Rahaghi, F.N.; Ross, J.C.; Washko, G.R. Disease staging and prognosis in smokers using deep learning in chest computed tomography. Am. J. Respir. Crit. Care Med. 2018, 197, 193–203. [Google Scholar] [CrossRef] [PubMed]
  40. Danaee, P.; Ghaeini, R.; Hendrix, D.A. A deep learning approach for cancer detection and relevant gene identification. Pac. Symp. Biocomput. 2017, 2017, 219–229. [Google Scholar]
  41. Looney, P.; Stevenson, G.N.; Nicolaides, K.H.; Plasencia, W.; Molloholli, M.; Natsis, S.; Collins, S.L. Automatic 3D ultrasound segmentation of the first trimester placenta using deep learning. In Proceedings of the 2017 IEEE 14th International Symposium on Biomedical Imaging (ISBI 2017), Melbourne, Australia, 18–21 April 2017; pp. 279–282. [Google Scholar]
  42. Choi, H.; Jin, K.H. Fast and robust segmentation of the striatum using deep convolutional neural networks. J. Neurosci. Methods 2016, 274, 146–153. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  43. Ehteshami Bejnordi, B.; Veta, M.; Johannes van Diest, P.; van Ginneken, B.; Karssemeijer, N.; Litjens, G.; Venancio, R. Diagnostic Assessment of Deep Learning Algorithms for Detection of Lymph Node Metastases in Women With Breast Cancer. JAMA 2017, 318, 2199–2210. [Google Scholar] [CrossRef] [PubMed]
  44. Kleesiek, J.; Urban, G.; Hubert, A.; Schwarz, D.; Maier-Hein, K.; Bendszus, M.; Biller, A. Deep MRI brain extraction: A 3D convolutional neural network for skull stripping. Neuroimage 2016, 129, 460–469. [Google Scholar] [CrossRef] [PubMed]
  45. Kumar, D.; Wong, A.; Clausi, D.A. Lung nodule classification using deep features in CT images. In Proceedings of the 2015 12th Conference on Computer and Robot Vision (CRV), Halifax, NS, Canada, 3–5 June 2015; pp. 133–138. [Google Scholar]
  46. Sun, W.; Zheng, B.; Qian, W. Computer aided lung cancer diagnosis with deep learning algorithms. In Proceedings of the Medical Imaging 2016: Computer-Aided Diagnosis, San Diego, CA, USA, 27 February–3 March 2016; Volume 9785. [Google Scholar]
  47. Rasti, R.; Teshnehlab, M.; Phung, S.L. Breast cancer diagnosis in DCE-MRI using mixture ensemble of convolutional neural networks. Pattern Recognit. 2017, 72, 381–390. [Google Scholar] [CrossRef]
  48. Anirudh, R.; Thiagarajan, J.J.; Bremer, T.; Kim, H. Lung nodule detection using 3D convolutional neural networks trained on weakly labeled data. In Proceedings of the Medical Imaging 2016: Computer-Aided Diagnosis, San Diego, CA, USA, 27 February–3 March 2016; Volume 9785. [Google Scholar]
  49. Samala, R.K.; Chan, H.P.; Hadjiiski, L.M.; Cha, K.; Helvie, M.A. Deep-learning convolution neural network for computer-aided detection of microcalcifications in digital breast tomosynthesis. In Proceedings of the Medical Imaging 2016: Computer-Aided Diagnosis, San Diego, CA, USA, 27 February–3 March 2016; Volume 9785. [Google Scholar]
  50. Roth, H.R.; Farag, A.; Lu, L.; Turkbey, E.B.; Summers, R.M. Deep convolutional networks for pancreas segmentation in CT imaging. Med. Imaging Image Process. 2015, 9413, 94131G. [Google Scholar]
  51. Pratt, H.; Coenen, F.; Broadbent, D.M.; Harding, S.P.; Zheng, Y. Convolutional Neural Networks for Diabetic Retinopathy. Proced. Comput. Sci. 2016, 90, 200–205. [Google Scholar] [CrossRef]
  52. Dalmis, M.U.; Litjens, G.; Holland, K.; Setio, A.; Mann, R.; Karssemeijer, N.; Gubern-Merida, A. Using deep learning to segment breast and fibroglandular tissue in MRI volumes. Med. Phys. 2017, 44, 533–546. [Google Scholar] [CrossRef] [PubMed]
  53. Bayramoglu, N.; Kannala, J.; Heikkilä, J. Deep learning for magnification independent breast cancer histopathology image classification. In Proceedings of the 2016 23rd International Conference on Pattern Recognition (ICPR), Cancún, Mexico, 4–8 December 2016; pp. 2440–2445. [Google Scholar]
  54. Fu, H.; Xu, Y.; Wong, D.W.K.; Liu, J. Retinal vessel segmentation via deep learning network and fully-connected conditional random fields. In Proceedings of the 2016 IEEE 13th International Symposium on Biomedical Imaging (ISBI), Prague, Czech Republic, 13–16 April 2016; pp. 698–701. [Google Scholar]
  55. Akkus, Z.; Galimzianova, A.; Hoogi, A.; Rubin, D.L.; Erickson, B.J. Deep Learning for Brain MRI Segmentation: State of the Art and Future Directions. J. Digit. Imaging 2017, 30, 449–459. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  56. Moeskops, P.; Viergever, M.A.; Mendrik, A.M.; de Vries, L.S.; Benders, M.J.; Isgum, I. Automatic Segmentation of MR Brain Images With a Convolutional Neural Network. IEEE Trans. Med. Imaging 2016, 35, 1252–1261. [Google Scholar] [CrossRef] [PubMed]
  57. Ehteshami Bejnordi, B.; Mullooly, M.; Pfeiffer, R.M.; Fan, S.; Vacek, P.M.; Weaver, D.L.; Sherman, M.E. Using deep convolutional neural networks to identify and classify tumor-associated stroma in diagnostic breast biopsies. Mod. Pathol. 2018. [Google Scholar] [CrossRef] [PubMed]
  58. Li, Y.; Li, X.; Xie, X.; Shen, L. Deep learning based gastric cancer identification. In Proceedings of the 2018 IEEE 15th International Symposium on Biomedical Imaging (ISBI 2018), Washington, DC, USA, 4–7 April 2018; pp. 182–185. [Google Scholar]
  59. Chmelik, J.; Jakubicek, R.; Walek, P.; Jan, J.; Ourednicek, P.; Lambert, L.; Amadori, E.; Gavelli, G. Deep convolutional neural network-based segmentation and classification of difficult to define metastatic spinal lesions in 3D CT data. Med. Image Anal. 2018. [Google Scholar] [CrossRef] [PubMed]
  60. Wang, X.; Yang, W.; Weinreb, J.; Han, J.; Li, Q.; Kong, X.; Yan, Y.; Ke, Z.; Luo, B.; Liu, T.; et al. Searching for prostate cancer by fully automated magnetic resonance imaging classification: Deep learning versus non-deep learning. Sci. Rep. 2017, 7, 15415. [Google Scholar] [CrossRef] [PubMed]
  61. Causey, J.L.; Zhang, J.; Ma, S.; Jiang, B.; Qualls, J.A.; Politte, D.G.; Prior, F.; Zhang, S.; Huang, X. Highly accurate model for prediction of lung nodule malignancy with CT scans. Sci. Rep. 2018, 8, 9286. [Google Scholar] [CrossRef] [PubMed]
  62. Xu, J.; Xiang, L.; Liu, Q.; Gilmore, H.; Wu, J.; Tang, J.; Madabhushi, A. Stacked sparse autoencoder (SSAE) for nuclei detection on breast cancer histopathology images. IEEE Trans. Med. Imaging 2016, 35, 119–130. [Google Scholar] [CrossRef] [PubMed]
  63. Faust, O.; Hagiwara, Y.; Hong, T.J.; Lih, O.S.; Acharya, U.R. Deep learning for healthcare applications based on physiological signals: A review. Comput. Methods Programs Biomed. 2018. [Google Scholar] [CrossRef] [PubMed]
  64. Litjens, G.; Kooi, T.; Bejnordi, B.E.; Setio, A.A.A.; Ciompi, F.; Ghafoorian, M.; Sánchez, C.I. A survey on deep learning in medical image analysis. Med. Image Anal. 2017, 42, 60–88. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  65. Schmidhuber, J. Deep learning in neural networks: An overview. Neural Netw. 2015, 61, 85–117. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  66. Shen, D.; Wu, G.; Suk, H.I. Deep learning in medical image analysis. Annu. Rev. Biomed. Eng. 2017, 19, 221–248. [Google Scholar] [CrossRef] [PubMed]
  67. Hu, Z.; Tang, J.; Wang, Z.; Zhang, K.; Zhang, L.; Sun, Q. Deep Learning for Image-based Cancer Detection and Diagnosis—A Survey. Pattern Recognit. 2018. [Google Scholar] [CrossRef]
  68. Madu, C.N.; Kuei, C.-H.; Lee, P. Urban sustainability management: A deep learning perspective. Sustain. Cities Soc. 2017, 30, 1–17. [Google Scholar] [CrossRef]
  69. Fadlullah, Z.M.; Tang, F.; Mao, B.; Kato, N.; Akashi, O.; Inoue, T.; Mizutani, K. State-of-the-Art Deep Learning: Evolving Machine Intelligence Toward Tomorrow’s Intelligent Network Traffic Control Systems. IEEE Commun. Surv. Tutor. 2017, 19, 2432–2455. [Google Scholar] [CrossRef]
  70. Yousefi-Azar, M.; Hamey, L. Text summarization using unsupervised deep learning. Expert Syst. Appl. 2017, 68, 93–105. [Google Scholar] [CrossRef]
  71. Zhou, X.; Gong, W.; Fu, W.; Du, F. Application of deep learning in object detection. In Proceedings of the 2017 IEEE/ACIS 16th International Conference on Computer and Information Science (ICIS), Wuhan, China, 24–26 May 2017; pp. 631–634. [Google Scholar]
  72. Badjatiya, P.; Gupta, S.; Gupta, M.; Varma, V. Deep learning for hate speech detection in tweets. In Proceedings of the 26th International Conference on World Wide Web Companion, Perth, Australia, 3–7 April 2017; pp. 759–760. [Google Scholar]
  73. Deng, L. Deep learning: From speech recognition to language and multimodal processing. APSIPA Trans. Signal Inf. Process. 2016, 5. [Google Scholar] [CrossRef]
  74. Singh, S.P.; Kumar, A.; Darbari, H.; Singh, L.; Rastogi, A.; Jain, S. Machine translation using deep learning: An overview. In Proceedings of the 2017 International Conference on Computer, Communications and Electronics (Comptelix), Jaipur, India, 1–2 July 2017; pp. 162–167. [Google Scholar]
Figure 1. Flow diagram of the review process.
Table 1. Literature sources.

Literature Source | ISSN
Briefings in Bioinformatics | 1477-4054
Expert Systems with Applications | 0957-4174
IEEE Transactions on Medical Imaging | 1558-254X
Medical Image Analysis | 1361-8423
Molecular Pharmaceutics | 1543-8392
Nature | 1476-4687
Neural Computing and Applications | 0941-0643
Neurocomputing | 0925-2312
Table 2. Results of individual articles in the domain of deep learning and medical diagnosis.

Reference | Method | Data Source | Application/Remarks
[17] | CNN | Computed tomography (CT) | Anatomical localization; the results indicate that 3D localization of anatomical regions is possible with 2D images.
[18] | CNN | MRI | Automated segmentation of the liver, heart and great vessels; it was concluded that this approach has great potential for clinical applications.
[19] | CNN | MRI | Brain tumor grading; a 3-layered CNN showed an 18% performance improvement over the baseline neural network.
[20] | CNN | Fundus images | Glaucoma detection; the experiments were performed on the SCES and ORIGA datasets; it was noted that this approach may be well suited for glaucoma detection.
[21] | CNN | MRI | Alzheimer's disease prediction; the accuracy of this approach is far superior compared to 2D methods.
[22] | CNN | Mammography | Automatic breast tissue classification; the pectoral muscles were detected with high accuracy (0.83), while nipple detection had lower accuracy (0.56).
[23] | CNN | ECG | Automatic detection of myocardial infarction; average accuracy was 93.53% with noise and 95.22% without noise.
[24] | CNN | CT | Automated segmentation of human brain structures.
[25] | DBN-NN | Mammography | Automatic diagnosis for detecting breast cancer; the accuracy of the overall neural network was 99.68%, the sensitivity was 100%, and the specificity was 99.47%.
[26] | SDAE | Ultrasound of breasts, and lung CT | Breast lesion and pulmonary nodule detection/diagnosis; the results indicated a significant increase in performance. In addition, it was noted that deep learning techniques have the potential to change CAD systems with ease and without the need for structural redesign.
[27] | CNN | Clinical images | Classification of skin cancer; the results of the study were satisfactory, as the deep convolutional neural network achieved performance on par with the expertise of 21 board-certified dermatologists. This can lead to mobile dermatology diagnosis, providing millions of people with universal diagnostic care.
[28] | UGRNN | Various medical data sets | Drug-induced liver injury (DILI) prediction; the model had an accuracy of 86.9%, sensitivity of 82.5%, and specificity of 92.9%. Overall, deep learning gave significantly better results than other DILI prediction models. In sum, deep learning can lower the health risk for humans when it comes to DILI.
[29] | CRNN | Fundus images | Automatic grading system for nuclear cataracts; this method improved the clinical management of the disease and has potential for the diagnosis of other eye diseases.
[30] | CNN | X-ray | Chest pathology identification; the area under the curve was 0.89 for heart detection, 0.93 for right pleural effusion detection, and 0.79 for the classification between a healthy and an abnormal chest X-ray. It was concluded that non-medical images and datasets can be sufficient for the recognition of medical images.
[31] | SA-SVM | Dermoscopic images | Skin cancer diagnosis; the results were promising, and this type of self-advised SVM method could be used in cases where labeled data is limited.
[32] | CSDCNN | Mammography | Multi-classification of breast cancer; a high accuracy of 93.2% was achieved on large-scale datasets.
[33] | DNN | MRI | Brain tumor segmentation; with this method the whole brain can be segmented in 25 s to 3 min, making it a very practical segmentation tool.
[34] | CNN | MRI | Brain lesion segmentation; this approach produced very good results.
[35] | CNN | Mammography | Breast density classification; radiologists have difficulty differentiating between two of the density categories. A learning model was developed that helped radiologists in the diagnosis process; deep learning was found to be useful for realistic diagnosis and overall clinical needs.
[36] | RBM | Shear-wave elastography (SWE) | Breast tumor classification; the results indicated that the deep learning model achieved a remarkable accuracy rate of 93.4%, with 88.6% sensitivity and 97.1% specificity.
[37] | DL-CNN | CT urography | Urinary bladder segmentation; this method can handle strong boundaries between two regions that have large differences in gray levels.
[38] | U-Net | MRI | Glioblastoma segmentation; this approach allowed a large U-Net to be trained on small datasets without significant overfitting. Taking into consideration that patients move during the segmentation process, there may be a performance increase when switching to 3D convolutions.
[39] | CNN | CT | Disease staging and prognosis of smokers; this type of chronic lung illness prognosis is powerful for risk assessment.
[40] | SDAE | Various medical images | Cancer detection from gene data; the study was successful, as this method managed to extract genes that are helpful for cancer prediction.
[41] | CNN | 3D volumetric ultrasound data | Ultrasound segmentation of the first-trimester placenta; it was noted that this approach had performance similar to results acquired from MRI data.
[42] | CNN | MRI | Segmentation of the striatum; two serial CNN architectures were used; the speed and accuracy of this approach make it adequate for application in neuroscience and other clinical fields.
[43] | CNN | Digitized slide images | Lymph node metastasis detection in breast cancer; the best performing algorithm achieved performance comparable to a pathologist.
[44] | CNN | MRI | Brain segmentation; the results are comparable to other state-of-the-art approaches.
[45] | CAD classifier with deep features from an autoencoder | CT | Lung nodule classification; this approach achieved an accuracy of 75.01% and sensitivity of 83.35%, with 0.39 false positives per patient over 10 cross-validations.
[46] | CNN, DBN, SDAE | CT | Lung cancer diagnosis; the highest accuracy was achieved with the DBN (0.8119).
[47] | CNN | MRI | Breast cancer diagnosis; with this approach the achieved accuracy was 96.39%, the sensitivity was 97.73%, and the specificity was 94.87%.
[48] | CNN | CT | Lung nodule detection; the network was trained with weak label information; 3D segmentation could exclude air tracts in the lungs, thus reducing false positives.
[49] | DLCNN | Planar projection (PPJ) images | Microcalcification detection in digital breast tomosynthesis; the best obtained AUC was 0.933.
[50] | CNN | CT | Pancreas segmentation; average Dice scores ranged from 46.6% to 68.8%.
[51] | CNN | Fundus images | Diagnosis of diabetic retinopathy; on a dataset of 80,000 images the accuracy was 75% and the sensitivity was 95%.
[52] | U-Net | MRI | Breast and fibroglandular tissue segmentation; average Dice similarity coefficients (DSC) were 0.850 for the 3C U-net, 0.811 for the 2C U-net, and 0.671 for the atlas-based method.
[53] | CNN | Histopathology images | Breast cancer histopathology classification; the average recognition rate was 82.13% for classification tasks, and accuracy was 80.10% for magnification estimation.
[54] | CNN | Fundus images | Retinal vessel segmentation; this method reduces the number of false positives.
[55] | CNN | MRI | Brain segmentation; the performance is dependent on several factors such as initialization, preprocessing, and post-processing.
[56] | CNN | MRI | Automatic brain segmentation; this approach can be used to obtain accurate brain segmentation results.
[57] | CNN | Digital images | Identifying and classifying tumor-associated stroma in breast biopsies; the study revealed that this deep learning approach was able to define stromal features of ductal carcinoma in situ grade.
[58] | CNN | Gastric images | Gastric cancer identification; the classification accuracy was 97.93%.
[59] | CNN | CT | Lytic and sclerotic metastatic spinal lesion detection, segmentation and classification; the obtained results were quantitatively compared to other methods, and it was concluded that this approach can provide better accuracy even for small lesions (volume greater than 1.4 mm3 and diameter greater than 0.7 mm).
[60] | DCNN | MRI | Prostate cancer classification conducted with both deep learning and non-deep-learning approaches; the deep learning approach had an accuracy of 84%, sensitivity of 69.6% and specificity of 83.9%, while the non-deep-learning approach had 70% accuracy, 49.4% sensitivity, and 81.7% specificity.
[61] | CNN | CT | Lung cancer nodule malignancy classification; with this approach a high accuracy of 99% was achieved, which is comparable to the accuracy of an experienced radiologist.
[62] | SSAE | Digital pathology images | Nuclei detection in breast cancer; the stacked sparse autoencoder (SSAE) approach can outperform state-of-the-art nuclear detection strategies.
Table 3. Synthesis of articles by type of deep learning method, data source and application.

Type of Deep Learning Method | Number of Articles
CNN | 32
RBM | 1
SA-SVM | 1
CRNN | 1
Other | 3
DBN | 1
SDAE | 2
UGRNN | 1
Multiple | 1
U-Net | 2
CSDCNN | 1

Type of Data Source | Number of Articles
X-ray | 1
Ultrasound | 2
CT | 10
MRI | 13
Fundus photography | 4
Mammography | 4
Other data | 12

Application Type | Number of Articles
Localization | 1
Segmentation | 14
Grading | 2
Detection | 8
Prediction | 4
Classification | 8
Diagnosis | 6
Identification | 2

