Recent Advances in Deep Learning and Medical Imaging for Cancer Treatment

A special issue of Cancers (ISSN 2072-6694).

Deadline for manuscript submissions: closed (31 May 2023) | Viewed by 80786

Printed Edition Available!
A printed edition of this Special Issue is available.

Special Issue Editors


Guest Editor
Department of Intelligent Mechatronics Engineering, Sejong University, Seoul 05006, Republic of Korea
Interests: data science; machine learning; data structures and algorithms; systems engineering; neural networks; data mining; project management; TensorFlow; predictive modelling; artificial intelligence; Hadoop; Apache Spark; software development; empirical research; big data

Guest Editor
Faculty of Applied Mathematics, Silesian University of Technology, Kaszubska 23, 44-100 Gliwice, Poland
Interests: computational intelligence; neural networks; image processing; expert systems

Special Issue Information

Dear Colleagues,

Deep learning is a machine learning method in which computational models composed of multiple processing layers are fed raw data and automatically learn abstract data representations for detection and classification. Advances in medical imaging demand new deep learning methods and applications. Because of considerable variation and complexity, representations of clinical knowledge must be learned from big imaging data for a better understanding of health informatics. Numerous challenges remain, however, including diverse and inhomogeneous inputs, high-dimensional features versus inadequate numbers of subjects, subtle key patterns hidden by sizeable individual variation, and sometimes an unknown mechanism underlying the disease. These challenges and opportunities have inspired an increasing number of experts to devote their research to machine learning in medical imaging.

The sudden increase in market demand for imaging science creates opportunities to develop new imaging techniques for diagnosis and therapy. New clinical indications using proteomic or genomic expression in oncology, cardiology, and neurology offer a further stimulus, promoting growth in procedure volume and sales of clinical imaging agents as well as the development of new radiopharmaceuticals. This Special Issue will bring together researchers from diverse fields and specializations, including healthcare engineering, bioinformatics, medicine, computer engineering, computer science, information technology, and mathematics.

Potential topics include but are not limited to:

  • Recent advances in bioimaging applications in preclinical drug discovery;
  • Computer-aided detection and diagnosis;
  • Image analysis of anatomical structures/functions and lesions;
  • Multimodality fusion for analysis, diagnosis, and intervention;
  • Deep learning for medical applications;
  • Automated medical diagnostics;
  • Advances in imaging instrumentation development;
  • Hybrid imaging modalities in disease management;
  • Medical image reconstruction;
  • Medical image retrieval;
  • Molecular/pathologic/cellular image analysis;
  • Dynamic, functional, and physiologic imaging.

Dr. Muhammad Fazal Ijaz
Prof. Dr. Marcin Woźniak
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Cancers is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2900 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • deep learning
  • machine learning
  • medical imaging
  • bioimaging
  • computer-aided diagnosis
  • oncology

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Related Special Issue

Published Papers (20 papers)


Editorial


5 pages, 176 KiB  
Editorial
Editorial: Recent Advances in Deep Learning and Medical Imaging for Cancer Treatment
by Muhammad Fazal Ijaz and Marcin Woźniak
Cancers 2024, 16(4), 700; https://doi.org/10.3390/cancers16040700 - 7 Feb 2024
Cited by 3 | Viewed by 3007
Abstract
In the evolving landscape of medical imaging, the escalating need for deep-learning
methods takes center stage, offering the capability to autonomously acquire abstract data
representations crucial for early detection and classification for cancer treatment. The
complexities in handling diverse inputs, high-dimensional features, and subtle patterns
within imaging data are acknowledged as significant challenges in this technological
pursuit. This Special Issue, “Recent Advances in Deep Learning and Medical Imaging
for Cancer Treatment”, has attracted 19 high-quality articles that cover state-of-the-art
applications and technical developments of deep learning, medical imaging, automatic
detection and classification, and explainable artificial intelligence-enabled diagnosis for cancer
treatment. In the ever-evolving landscape of cancer treatment, five pivotal themes have
emerged as beacons of transformative change. This editorial delves into the realms of
innovation that are shaping the future of cancer treatment, focusing on five interconnected
themes: use of artificial intelligence in medical imaging, applications of AI in cancer
diagnosis and treatment, addressing challenges in medical image analysis, advancements
in cancer detection techniques, and innovations in skin cancer classification. Full article

Research


13 pages, 3885 KiB  
Article
Deep Learning for Automatic Diagnosis and Morphologic Characterization of Malignant Biliary Strictures Using Digital Cholangioscopy: A Multicentric Study
by Miguel Mascarenhas Saraiva, Tiago Ribeiro, Mariano González-Haba, Belén Agudo Castillo, João P. S. Ferreira, Filipe Vilas Boas, João Afonso, Francisco Mendes, Miguel Martins, Pedro Cardoso, Pedro Pereira and Guilherme Macedo
Cancers 2023, 15(19), 4827; https://doi.org/10.3390/cancers15194827 - 1 Oct 2023
Cited by 3 | Viewed by 1633
Abstract
Digital single-operator cholangioscopy (D-SOC) has enhanced the ability to diagnose indeterminate biliary strictures (BSs). Pilot studies using artificial intelligence (AI) models in D-SOC demonstrated promising results. Our group aimed to develop a convolutional neural network (CNN) for the identification and morphological characterization of malignant BSs in D-SOC. A total of 84,994 images from 129 D-SOC exams in two centers (Portugal and Spain) were used for developing the CNN. Each image was categorized as either a normal/benign finding or as malignant lesion (the latter dependent on histopathological results). Additionally, the CNN was evaluated for the detection of morphologic features, including tumor vessels and papillary projections. The complete dataset was divided into training and validation datasets. The model was evaluated through its sensitivity, specificity, positive and negative predictive values, accuracy and area under the receiver-operating characteristic and precision-recall curves (AUROC and AUPRC, respectively). The model achieved a 82.9% overall accuracy, 83.5% sensitivity and 82.4% specificity, with an AUROC and AUPRC of 0.92 and 0.93, respectively. The developed CNN successfully distinguished benign findings from malignant BSs. The development and application of AI tools to D-SOC has the potential to significantly augment the diagnostic yield of this exam for identifying malignant strictures. Full article
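The metrics reported above (sensitivity, specificity, accuracy, AUROC) can all be computed directly from per-frame labels and model scores. A minimal NumPy sketch, with toy data standing in for the D-SOC validation set (the rank-based AUROC formula below assumes no tied scores):

```python
import numpy as np

def binary_metrics(y_true, y_score, threshold=0.5):
    """Sensitivity, specificity, accuracy, and AUROC for a binary classifier."""
    y_true = np.asarray(y_true)
    y_pred = (np.asarray(y_score) >= threshold).astype(int)
    tp = np.sum((y_pred == 1) & (y_true == 1))
    tn = np.sum((y_pred == 0) & (y_true == 0))
    fp = np.sum((y_pred == 1) & (y_true == 0))
    fn = np.sum((y_pred == 0) & (y_true == 1))
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    acc = (tp + tn) / len(y_true)
    # AUROC via the rank-sum (Mann-Whitney) formulation (ties not handled)
    order = np.argsort(y_score)
    ranks = np.empty(len(y_score))
    ranks[order] = np.arange(1, len(y_score) + 1)
    n_pos = y_true.sum()
    n_neg = len(y_true) - n_pos
    auroc = (ranks[y_true == 1].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)
    return sens, spec, acc, auroc

# toy example: 4 malignant (1) and 4 benign (0) frames with model scores
y_true = np.array([1, 1, 1, 1, 0, 0, 0, 0])
y_score = np.array([0.9, 0.8, 0.7, 0.4, 0.6, 0.3, 0.2, 0.1])
sens, spec, acc, auroc = binary_metrics(y_true, y_score)
```

AUPRC can be obtained analogously by sweeping the threshold and integrating precision over recall.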

36 pages, 5863 KiB  
Article
Deep Transfer Learning with Enhanced Feature Fusion for Detection of Abnormalities in X-ray Images
by Zaenab Alammar, Laith Alzubaidi, Jinglan Zhang, Yuefeng Li, Waail Lafta and Yuantong Gu
Cancers 2023, 15(15), 4007; https://doi.org/10.3390/cancers15154007 - 7 Aug 2023
Cited by 17 | Viewed by 2399
Abstract
Medical image classification poses significant challenges in real-world scenarios. One major obstacle is the scarcity of labelled training data, which hampers the performance of image-classification algorithms and generalisation. Gathering sufficient labelled data is often difficult and time-consuming in the medical domain, but deep learning (DL) has shown remarkable performance, although it typically requires a large amount of labelled data to achieve optimal results. Transfer learning (TL) has played a pivotal role in reducing the time, cost, and need for a large number of labelled images. This paper presents a novel TL approach that aims to overcome the limitations and disadvantages of TL that are characteristic of an ImageNet dataset, which belongs to a different domain. Our proposed TL approach involves training DL models on numerous medical images that are similar to the target dataset. These models were then fine-tuned using a small set of annotated medical images to leverage the knowledge gained from the pre-training phase. We specifically focused on medical X-ray imaging scenarios that involve the humerus and wrist from the musculoskeletal radiographs (MURA) dataset. Both of these tasks face significant challenges regarding accurate classification. The models trained with the proposed TL were used to extract features and were subsequently fused to train several machine learning (ML) classifiers. We combined these diverse features to represent various relevant characteristics in a comprehensive way. Through extensive evaluation, our proposed TL and feature-fusion approach using ML classifiers achieved remarkable results. For the classification of the humerus, we achieved an accuracy of 87.85%, an F1-score of 87.63%, and a Cohen’s Kappa coefficient of 75.69%. For wrist classification, our approach achieved an accuracy of 85.58%, an F1-score of 82.70%, and a Cohen’s Kappa coefficient of 70.46%. 
The results demonstrated that the models trained using our proposed TL approach outperformed those trained with ImageNet TL. We employed visualisation techniques to further validate these findings, including a gradient-based class activation heat map (Grad-CAM) and local interpretable model-agnostic explanations (LIME). These visualisation tools provided additional evidence to support the superior accuracy of models trained with our proposed TL approach compared to those trained with ImageNet TL. Furthermore, our proposed TL approach exhibited greater robustness in various experiments compared to ImageNet TL. Importantly, the proposed TL approach and the feature-fusion technique are not limited to specific tasks. They can be applied to various medical image applications, thus extending their utility and potential impact. To demonstrate the concept of reusability, a computed tomography (CT) case was adopted. The results obtained from the proposed method showed improvements. Full article
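The feature-fusion step described above — extracting vectors from several fine-tuned backbones, concatenating them per image, and training an ML classifier on the result — can be sketched with random features standing in for the network outputs. The nearest-centroid classifier here is an illustrative stand-in, not one of the paper's classifiers:

```python
import numpy as np

rng = np.random.default_rng(0)

# stand-ins for features extracted by two fine-tuned backbones
# (in the paper these come from DL models pre-trained on similar medical images)
n_per_class = 20
feat_a_norm = rng.normal(0.0, 1.0, (n_per_class, 64))
feat_a_abn  = rng.normal(2.0, 1.0, (n_per_class, 64))
feat_b_norm = rng.normal(0.0, 1.0, (n_per_class, 32))
feat_b_abn  = rng.normal(2.0, 1.0, (n_per_class, 32))

# feature fusion: concatenate the per-image feature vectors from both models
X = np.vstack([np.hstack([feat_a_norm, feat_b_norm]),
               np.hstack([feat_a_abn, feat_b_abn])])
y = np.array([0] * n_per_class + [1] * n_per_class)

# a minimal ML classifier on the fused features (nearest centroid)
centroids = np.stack([X[y == c].mean(axis=0) for c in (0, 1)])
pred = np.argmin(((X[:, None, :] - centroids[None]) ** 2).sum(-1), axis=1)
accuracy = (pred == y).mean()
```

Any off-the-shelf ML classifier (SVM, random forest, etc.) could be dropped in at the last step; the point is that the fused vector carries complementary characteristics from both backbones.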

22 pages, 11694 KiB  
Article
Explainable CAD System for Classification of Acute Lymphoblastic Leukemia Based on a Robust White Blood Cell Segmentation
by Jose Luis Diaz Resendiz, Volodymyr Ponomaryov, Rogelio Reyes Reyes and Sergiy Sadovnychiy
Cancers 2023, 15(13), 3376; https://doi.org/10.3390/cancers15133376 - 27 Jun 2023
Cited by 8 | Viewed by 1929
Abstract
Leukemia is a significant health challenge, with high incidence and mortality rates. Computer-aided diagnosis (CAD) has emerged as a promising approach. However, deep-learning methods suffer from the “black box problem”, leading to unreliable diagnoses. This research proposes an Explainable AI (XAI) Leukemia classification method that addresses this issue by incorporating a robust White Blood Cell (WBC) nuclei segmentation as a hard attention mechanism. The segmentation of WBC is achieved by combining image processing and U-Net techniques, resulting in improved overall performance. The segmented images are fed into modified ResNet-50 models, where the MLP classifier, activation functions, and training scheme have been tested for leukemia subtype classification. Additionally, we add visual explainability and feature space analysis techniques to offer an interpretable classification. Our segmentation algorithm achieves an Intersection over Union (IoU) of 0.91 across six databases. Furthermore, the deep-learning classifier achieves an accuracy of 99.9% on testing. The Grad-CAM methods and clustering space analysis confirm improved network focus when classifying segmented images compared to non-segmented images. Overall, the proposed visual explainable CAD system has the potential to assist physicians in diagnosing leukemia and improving patient outcomes. Full article
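The IoU figure reported for the segmentation stage measures the overlap between the predicted and ground-truth masks. A minimal NumPy sketch on toy nucleus masks:

```python
import numpy as np

def iou(mask_a, mask_b):
    """Intersection over Union between two binary segmentation masks."""
    inter = np.logical_and(mask_a, mask_b).sum()
    union = np.logical_or(mask_a, mask_b).sum()
    return inter / union if union else 1.0

# toy WBC-nucleus masks: predicted vs. ground truth on an 8x8 patch
gt = np.zeros((8, 8), bool)
gt[2:6, 2:6] = True        # 4x4 ground-truth nucleus
pred = np.zeros((8, 8), bool)
pred[3:7, 2:6] = True      # prediction shifted down one row
overlap = iou(pred, gt)    # intersection 12 px, union 20 px -> 0.6
```

The "hard attention" use of the mask is then just element-wise multiplication: `image * mask` zeroes out everything outside the segmented nucleus before classification.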

15 pages, 8015 KiB  
Article
Performing Automatic Identification and Staging of Urothelial Carcinoma in Bladder Cancer Patients Using a Hybrid Deep-Machine Learning Approach
by Suryadipto Sarkar, Kong Min, Waleed Ikram, Ryan W. Tatton, Irbaz B. Riaz, Alvin C. Silva, Alan H. Bryce, Cassandra Moore, Thai H. Ho, Guru Sonpavde, Haidar M. Abdul-Muhsin, Parminder Singh and Teresa Wu
Cancers 2023, 15(6), 1673; https://doi.org/10.3390/cancers15061673 - 8 Mar 2023
Cited by 16 | Viewed by 2973
Abstract
Accurate clinical staging of bladder cancer aids in optimizing the process of clinical decision-making, thereby tailoring the effective treatment and management of patients. While several radiomics approaches have been developed to facilitate the process of clinical diagnosis and staging of bladder cancer using grayscale computed tomography (CT) scans, the performances of these models have been low, with little validation and no clear consensus on specific imaging signatures. We propose a hybrid framework comprising pre-trained deep neural networks for feature extraction, in combination with statistical machine learning techniques for classification, which is capable of performing the following classification tasks: (1) bladder cancer tissue vs. normal tissue, (2) muscle-invasive bladder cancer (MIBC) vs. non-muscle-invasive bladder cancer (NMIBC), and (3) post-treatment changes (PTC) vs. MIBC. Full article

20 pages, 6705 KiB  
Article
Adaptive Aquila Optimizer with Explainable Artificial Intelligence-Enabled Cancer Diagnosis on Medical Imaging
by Salem Alkhalaf, Fahad Alturise, Adel Aboud Bahaddad, Bushra M. Elamin Elnaim, Samah Shabana, Sayed Abdel-Khalek and Romany F. Mansour
Cancers 2023, 15(5), 1492; https://doi.org/10.3390/cancers15051492 - 27 Feb 2023
Cited by 12 | Viewed by 2489
Abstract
Explainable Artificial Intelligence (XAI) is a branch of AI that focuses on developing systems that provide understandable and clear explanations for their decisions. In the context of cancer diagnosis on medical imaging, an XAI technology uses advanced image analysis methods such as deep learning (DL) to make a diagnosis and analyse medical images, and also provides a clear explanation of how it arrived at its diagnosis. This includes highlighting the specific areas of the image that the system recognized as indicative of cancer, while also providing data on the underlying AI algorithm and the decision-making process used. The objective of XAI is to give patients and doctors a better understanding of the system's decision-making process and to increase transparency and trust in the diagnostic method. Therefore, this study develops an Adaptive Aquila Optimizer with Explainable Artificial Intelligence Enabled Cancer Diagnosis (AAOXAI-CD) technique for medical imaging. The proposed AAOXAI-CD technique aims to accomplish effective colorectal and osteosarcoma cancer classification. To achieve this, the AAOXAI-CD technique initially employs the Faster SqueezeNet model for feature vector generation. In addition, the hyperparameters of the Faster SqueezeNet model are tuned using the AAO algorithm. For cancer classification, a majority weighted voting ensemble model with three DL classifiers is used, namely a recurrent neural network (RNN), a gated recurrent unit (GRU), and a bidirectional long short-term memory (BiLSTM) network. Furthermore, the AAOXAI-CD technique incorporates the XAI approach LIME for better understanding and explainability of the black-box method for accurate cancer detection. The AAOXAI-CD methodology was evaluated on medical cancer imaging databases, and the outcomes confirmed its superior performance over other current approaches. Full article
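The majority weighted voting step can be illustrated as follows; the probabilities and weights below are hypothetical, standing in for the outputs of the RNN, GRU, and BiLSTM classifiers and their validation-derived weights:

```python
import numpy as np

def weighted_vote(probs, weights):
    """Majority weighted voting: combine per-classifier class probabilities
    using per-classifier weights, then pick the argmax class."""
    probs = np.asarray(probs, dtype=float)     # shape (n_classifiers, n_classes)
    weights = np.asarray(weights, dtype=float)
    combined = (weights[:, None] * probs).sum(axis=0)
    return int(np.argmax(combined))

# three stand-in classifiers (RNN, GRU, BiLSTM in the paper) scoring one image
probs = [[0.6, 0.4],   # classifier 1 leans toward class 0
         [0.3, 0.7],   # classifier 2 leans toward class 1
         [0.2, 0.8]]   # classifier 3 leans toward class 1
weights = [0.5, 0.3, 0.2]  # hypothetical validation weights
decision = weighted_vote(probs, weights)
```

Even though the highest-weighted classifier prefers class 0, the weighted sum (0.43 vs. 0.57) favours class 1, which is exactly the point of weighting votes rather than counting them.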

12 pages, 3733 KiB  
Article
Interpretable and Reliable Oral Cancer Classifier with Attention Mechanism and Expert Knowledge Embedding via Attention Map
by Bofan Song, Chicheng Zhang, Sumsum Sunny, Dharma Raj KC, Shaobai Li, Keerthi Gurushanth, Pramila Mendonca, Nirza Mukhia, Sanjana Patrick, Shubha Gurudath, Subhashini Raghavan, Imchen Tsusennaro, Shirley T. Leivon, Trupti Kolur, Vivek Shetty, Vidya Bushan, Rohan Ramesh, Vijay Pillai, Petra Wilder-Smith, Amritha Suresh, Moni Abraham Kuriakose, Praveen Birur and Rongguang Liang
Cancers 2023, 15(5), 1421; https://doi.org/10.3390/cancers15051421 - 23 Feb 2023
Cited by 2 | Viewed by 1900
Abstract
Convolutional neural networks have demonstrated excellent performance in oral cancer detection and classification. However, the end-to-end learning strategy makes CNNs hard to interpret, and it can be challenging to fully understand the decision-making procedure. Additionally, reliability is also a significant challenge for CNN-based approaches. In this study, we proposed a neural network called the attention branch network (ABN), which combines visual explanation and attention mechanisms to improve recognition performance and interpret the decision-making simultaneously. We also embedded expert knowledge into the network by having human experts manually edit the attention maps for the attention mechanism. Our experiments have shown that ABN performs better than the original baseline network. By introducing Squeeze-and-Excitation (SE) blocks into the network, the cross-validation accuracy increased further. Furthermore, we observed that some previously misclassified cases were correctly recognized after the attention maps were manually edited. The cross-validation accuracy increased from 0.846 to 0.875 with the ABN (ResNet18 as baseline), 0.877 with SE-ABN, and 0.903 after embedding expert knowledge. The proposed method provides an accurate, interpretable, and reliable oral cancer computer-aided diagnosis system through visual explanation, attention mechanisms, and expert knowledge embedding. Full article
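The Squeeze-and-Excitation blocks mentioned above rescale the channels of a feature map using a gate computed from globally pooled statistics. A NumPy sketch with random weights standing in for the learned dense layers (shapes and the reduction ratio are illustrative choices):

```python
import numpy as np

def se_block(feature_map, w1, w2):
    """Squeeze-and-Excitation: squeeze via global average pooling, excite via
    two small dense layers, then rescale each channel of the feature map."""
    squeeze = feature_map.mean(axis=(0, 1))          # (C,) channel descriptor
    hidden = np.maximum(w1 @ squeeze, 0.0)           # ReLU, reduced width
    scale = 1.0 / (1.0 + np.exp(-(w2 @ hidden)))     # sigmoid gate in (0, 1)
    return feature_map * scale                       # channel-wise reweighting

rng = np.random.default_rng(0)
fmap = rng.normal(size=(7, 7, 8))   # H x W x C feature map
w1 = rng.normal(size=(2, 8))        # reduction: 8 channels -> 2
w2 = rng.normal(size=(8, 2))        # expansion: back to 8 channels
out = se_block(fmap, w1, w2)
```

Because the gate is a per-channel sigmoid, the block can only attenuate channels (never amplify them past their original magnitude), which is what lets the network emphasise informative channels relative to the rest.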

19 pages, 5862 KiB  
Article
Hyperparameter Optimizer with Deep Learning-Based Decision-Support Systems for Histopathological Breast Cancer Diagnosis
by Marwa Obayya, Mashael S. Maashi, Nadhem Nemri, Heba Mohsen, Abdelwahed Motwakel, Azza Elneil Osman, Amani A. Alneil and Mohamed Ibrahim Alsaid
Cancers 2023, 15(3), 885; https://doi.org/10.3390/cancers15030885 - 31 Jan 2023
Cited by 37 | Viewed by 2871
Abstract
Histopathological images are commonly used imaging modalities for breast cancer. As manual analysis of histopathological images is difficult, automated tools utilizing artificial intelligence (AI) and deep learning (DL) methods should be modelled. Recent advancements in DL approaches will be helpful in establishing maximal image classification performance in numerous application zones. This study develops an arithmetic optimization algorithm with deep-learning-based histopathological breast cancer classification (AOADL-HBCC) technique for healthcare decision making. The AOADL-HBCC technique employs noise removal based on median filtering (MF) and a contrast enhancement process. In addition, the presented AOADL-HBCC technique applies an AOA with a SqueezeNet model to derive feature vectors. Finally, a deep belief network (DBN) classifier with an Adamax hyperparameter optimizer is applied for the breast cancer classification process. A comparative study of the breast cancer classification results shows that the AOADL-HBCC technique displays better performance than other recent methodologies, with a maximum accuracy of 96.77%. Full article
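The preprocessing stage (median filtering for noise removal plus contrast enhancement) can be sketched in plain NumPy; the min-max stretch below is an illustrative stand-in for the paper's unspecified contrast-enhancement process:

```python
import numpy as np

def median_filter3(img):
    """3x3 median filter (edge pixels kept as-is), a simple noise-removal step."""
    out = img.copy()
    for i in range(1, img.shape[0] - 1):
        for j in range(1, img.shape[1] - 1):
            out[i, j] = np.median(img[i - 1:i + 2, j - 1:j + 2])
    return out

def stretch_contrast(img):
    """Min-max contrast stretching to the full [0, 255] range."""
    lo, hi = img.min(), img.max()
    return (img - lo) * 255.0 / (hi - lo)

# toy grayscale patch with a single salt-noise pixel
patch = np.full((5, 5), 100.0)
patch[2, 2] = 255.0
clean = median_filter3(patch)       # the outlier is replaced by the local median
stretched = stretch_contrast(patch)
```

In practice a vectorised implementation (e.g. `scipy.ndimage.median_filter`) would replace the explicit loops; the loop version is shown only to make the sliding-window median explicit.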

14 pages, 1060 KiB  
Article
A Series-Based Deep Learning Approach to Lung Nodule Image Classification
by Mehmet Ali Balcı, Larissa M. Batrancea, Ömer Akgüller and Anca Nichita
Cancers 2023, 15(3), 843; https://doi.org/10.3390/cancers15030843 - 30 Jan 2023
Cited by 13 | Viewed by 2909
Abstract
Although many studies have shown that deep learning approaches yield better results than traditional methods based on manual features, CAD methods still have several limitations. These are due to the diversity in imaging modalities and clinical pathologies. This diversity creates difficulties because of variation and similarities between classes. In this context, the new approach in our study is a hybrid method that performs classification using both medical image analysis and radial scanning series features. The areas of interest obtained from images are subjected to a radial scan, with their centers as poles, in order to obtain series. A U-shaped convolutional neural network model is then used for the 4D data classification problem. We therefore present a novel approach to the classification of 4D data obtained from lung nodule images. With radial scanning, the intrinsic character of nodule images is captured, and a powerful classification is performed. According to our results, an accuracy of 92.84% was obtained, with much more efficient classification scores compared to recent classifiers. Full article
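The radial-scanning idea — sampling intensity series along rays emanating from the ROI center, used as the pole — can be sketched as follows; the ray and sample counts are arbitrary choices for illustration:

```python
import numpy as np

def radial_series(img, center, n_rays=8, n_samples=16):
    """Sample intensity series along rays from a pole (the ROI center),
    turning a 2D region of interest into n_rays one-dimensional series."""
    h, w = img.shape
    cy, cx = center
    series = np.zeros((n_rays, n_samples))
    radius = min(h, w) / 2 - 1
    for k, theta in enumerate(np.linspace(0, 2 * np.pi, n_rays, endpoint=False)):
        for s, r in enumerate(np.linspace(0, radius, n_samples)):
            y = int(round(cy + r * np.sin(theta)))
            x = int(round(cx + r * np.cos(theta)))
            series[k, s] = img[y, x]
    return series

# toy nodule: a bright disc (radius 6) on a dark 32x32 background
yy, xx = np.mgrid[0:32, 0:32]
img = ((yy - 16) ** 2 + (xx - 16) ** 2 <= 36).astype(float)
s = radial_series(img, center=(16, 16))
```

Each row of `s` is a series describing how intensity falls off along one ray; stacking such series over slices and modalities is what yields the higher-dimensional input the paper classifies.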

19 pages, 3220 KiB  
Article
APESTNet with Mask R-CNN for Liver Tumor Segmentation and Classification
by Prabhu Kavin Balasubramanian, Wen-Cheng Lai, Gan Hong Seng, Kavitha C and Jeeva Selvaraj
Cancers 2023, 15(2), 330; https://doi.org/10.3390/cancers15020330 - 4 Jan 2023
Cited by 20 | Viewed by 4662
Abstract
Diagnosis and treatment of hepatocellular carcinoma or metastases rely heavily on accurate segmentation and classification of liver tumours. However, due to the liver tumour's hazy borders and wide range of possible shapes, sizes, and positions, accurate and automatic tumour segmentation and classification remains a difficult challenge. With the advancement of computing, new models in artificial intelligence have evolved. Following its success in natural language processing (NLP), the transformer paradigm has been adopted by the computer vision (CV) community. While there are already accepted approaches to classifying the liver, especially in clinical settings, there is room for advancement in terms of their precision. This paper makes an effort to apply a novel deep-learning model for segmenting and classifying liver tumours. To accomplish this, the created model follows a three-stage procedure consisting of (a) pre-processing, (b) liver segmentation, and (c) classification. In the first phase, the collected Computed Tomography (CT) images undergo pre-processing, including contrast improvement via histogram equalization and noise reduction via the median filter. Next, an enhanced mask region-based convolutional neural network (Mask R-CNN) model is used to separate the liver from the CT abdominal image. To prevent overfitting, the segmented picture is fed into an Enhanced Swin Transformer Network with Adversarial Propagation (APESTNet). The experimental results prove the superior performance of the proposed model on a wide variety of CT images, as well as its efficiency and low sensitivity to noise. Full article

16 pages, 2476 KiB  
Article
An Explainable AI-Enabled Framework for Interpreting Pulmonary Diseases from Chest Radiographs
by Zubaira Naz, Muhammad Usman Ghani Khan, Tanzila Saba, Amjad Rehman, Haitham Nobanee and Saeed Ali Bahaj
Cancers 2023, 15(1), 314; https://doi.org/10.3390/cancers15010314 - 3 Jan 2023
Cited by 19 | Viewed by 4401
Abstract
Explainable Artificial Intelligence is a key component of artificially intelligent systems that aim to explain their classification results. Explaining classification results is essential for automatic disease diagnosis in healthcare. The human respiratory system is badly affected by various chest pulmonary diseases. Automatic classification and explanation can be used to detect these lung diseases. In this paper, we introduce a CNN-based transfer learning approach for automatically explaining pulmonary diseases, i.e., edema, tuberculosis, nodules, and pneumonia, from chest radiographs. Among these pulmonary diseases, pneumonia, which can be caused by COVID-19, is deadly; therefore, radiographs of COVID-19 are used for the explanation task. We used the ResNet50 neural network and trained it extensively on the COVID-CT and COVIDNet datasets. The interpretable model LIME is used to explain the classification results: LIME highlights the features of the input image that were important in generating the classification result. We evaluated the explanations against images highlighted by radiologists and found that our model highlights and explains the same regions. Our fine-tuned model achieved improved classification results, with accuracies of 93% and 97% on the two datasets, respectively. The analysis of our results indicates that this research not only improves classification results but also provides explanations of pulmonary diseases with advanced deep-learning methods. This research would assist radiologists with automatic disease detection and explanations, which can be used to make clinical decisions and assist in diagnosing and treating pulmonary diseases at an early stage. Full article
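LIME explains a prediction by perturbing regions of the input and observing how the model's score changes. An occlusion-based sketch of that perturbation idea, with a toy scoring function standing in for the fine-tuned ResNet50 (real LIME additionally fits a local linear surrogate over superpixels rather than a fixed grid):

```python
import numpy as np

def occlusion_importance(img, predict, patch=4):
    """LIME-style perturbation check: occlude one patch at a time and
    record how much the model's score drops for each region."""
    base = predict(img)
    h, w = img.shape
    heat = np.zeros((h // patch, w // patch))
    for i in range(0, h, patch):
        for j in range(0, w, patch):
            perturbed = img.copy()
            perturbed[i:i + patch, j:j + patch] = 0.0
            heat[i // patch, j // patch] = base - predict(perturbed)
    return heat

# toy "model": score is the mean intensity of the top-left quadrant
def toy_model(img):
    return img[:8, :8].mean()

img = np.ones((16, 16))
heat = occlusion_importance(img, toy_model)
# only patches inside the top-left quadrant change the score
```

The resulting heat map is the kind of region-level evidence that was compared against the radiologists' highlighted regions.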

26 pages, 2904 KiB  
Article
Attention Cost-Sensitive Deep Learning-Based Approach for Skin Cancer Detection and Classification
by Vinayakumar Ravi
Cancers 2022, 14(23), 5872; https://doi.org/10.3390/cancers14235872 - 29 Nov 2022
Cited by 12 | Viewed by 5133
Abstract
Deep learning-based models have been employed for the detection and classification of skin diseases through medical imaging. However, deep learning-based models are not effective for rare skin disease detection and classification. This is mainly due to the reason that rare skin disease has very a smaller number of data samples. Thus, the dataset will be highly imbalanced, and due to the bias in learning, most of the models give better performances. The deep learning models are not effective in detecting the affected tiny portions of skin disease in the overall regions of the image. This paper presents an attention-cost-sensitive deep learning-based feature fusion ensemble meta-classifier approach for skin cancer detection and classification. Cost weights are included in the deep learning models to handle the data imbalance during training. To effectively learn the optimal features from the affected tiny portions of skin image samples, attention is integrated into the deep learning models. The features from the finetuned models are extracted and the dimensionality of the features was further reduced by using a kernel-based principal component (KPCA) analysis. The reduced features of the deep learning-based finetuned models are fused and passed into ensemble meta-classifiers for skin disease detection and classification. The ensemble meta-classifier is a two-stage model. The first stage performs the prediction of skin disease and the second stage performs the classification by considering the prediction of the first stage as features. Detailed analysis of the proposed approach is demonstrated for both skin disease detection and skin disease classification. The proposed approach demonstrated an accuracy of 99% on skin disease detection and 99% on skin disease classification. 
In all the experimental settings, the proposed approach outperformed the existing methods and demonstrated a performance improvement of 4% accuracy for skin disease detection and 9% accuracy for skin disease classification. The proposed approach can be used as a computer-aided diagnosis (CAD) tool for the early diagnosis of skin cancer detection and classification in healthcare and medical environments. The tool can accurately detect skin diseases and classify the skin disease into their skin disease family. Full article
Show Figures

Figure 1

25 pages, 39567 KiB  
Article
Integrated Design of Optimized Weighted Deep Feature Fusion Strategies for Skin Lesion Image Classification
by Niharika Mohanty, Manaswini Pradhan, Annapareddy V. N. Reddy, Sachin Kumar and Ahmed Alkhayyat
Cancers 2022, 14(22), 5716; https://doi.org/10.3390/cancers14225716 - 21 Nov 2022
Cited by 7 | Viewed by 5278
Abstract
This study mainly focuses on pre-processing the HAM10000 and BCN20000 skin lesion datasets to select important features that will drive for proper skin cancer classification. In this work, three feature fusion strategies have been proposed by utilizing three pre-trained Convolutional Neural Network (CNN) [...] Read more.
This study mainly focuses on pre-processing the HAM10000 and BCN20000 skin lesion datasets to select important features that will drive for proper skin cancer classification. In this work, three feature fusion strategies have been proposed by utilizing three pre-trained Convolutional Neural Network (CNN) models, namely VGG16, EfficientNet B0, and ResNet50 to select the important features based on the weights of the features and are coined as Adaptive Weighted Feature Set (AWFS). Then, two other strategies, Model-based Optimized Weighted Feature Set (MOWFS) and Feature-based Optimized Weighted Feature Set (FOWFS), are proposed by optimally and adaptively choosing the weights using a meta-heuristic artificial jellyfish (AJS) algorithm. The MOWFS-AJS is a model-specific approach whereas the FOWFS-AJS is a feature-specific approach for optimizing the weights chosen for obtaining optimal feature sets. The performances of those three proposed feature selection strategies are evaluated using Decision Tree (DT), Naïve Bayesian (NB), Multi-Layer Perceptron (MLP), and Support Vector Machine (SVM) classifiers and the performance are measured through accuracy, precision, sensitivity, and F1-score. Additionally, the area under the receiver operating characteristics curves (AUC-ROC) is plotted and it is observed that FOWFS-AJS shows the best accuracy performance based on the SVM with 94.05% and 94.90%, respectively, for HAM 10000 and BCN 20000 datasets. Finally, the experimental results are also analyzed using a non-parametric Friedman statistical test and the computational times are recorded; the results show that, out of those three proposed feature selection strategies, the FOWFS-AJS performs very well because its quick converging nature is inculcated with the help of AJS. Full article
Show Figures

Figure 1

12 pages, 2006 KiB  
Article
A Deep Learning-Aided Automated Method for Calculating Metabolic Tumor Volume in Diffuse Large B-Cell Lymphoma
by Russ A. Kuker, David Lehmkuhl, Deukwoo Kwon, Weizhao Zhao, Izidore S. Lossos, Craig H. Moskowitz, Juan Pablo Alderuccio and Fei Yang
Cancers 2022, 14(21), 5221; https://doi.org/10.3390/cancers14215221 - 25 Oct 2022
Cited by 7 | Viewed by 7912
Abstract
Metabolic tumor volume (MTV) is a robust prognostic biomarker in diffuse large B-cell lymphoma (DLBCL). The available semiautomatic software for calculating MTV requires manual input limiting its routine application in clinical research. Our objective was to develop a fully automated method (AM) for [...] Read more.
Metabolic tumor volume (MTV) is a robust prognostic biomarker in diffuse large B-cell lymphoma (DLBCL). The available semiautomatic software for calculating MTV requires manual input limiting its routine application in clinical research. Our objective was to develop a fully automated method (AM) for calculating MTV and to validate the method by comparing its results with those from two nuclear medicine (NM) readers. The automated method designed for this study employed a deep convolutional neural network to segment normal physiologic structures from the computed tomography (CT) scans that demonstrate intense avidity on positron emission tomography (PET) scans. The study cohort consisted of 100 patients with newly diagnosed DLBCL who were randomly selected from the Alliance/CALGB 50,303 (NCT00118209) trial. We observed high concordance in MTV calculations between the AM and readers with Pearson’s correlation coefficients and interclass correlations comparing reader 1 to AM of 0.9814 (p < 0.0001) and 0.98 (p < 0.001; 95%CI = 0.96 to 0.99), respectively; and comparing reader 2 to AM of 0.9818 (p < 0.0001) and 0.98 (p < 0.0001; 95%CI = 0.96 to 0.99), respectively. The Bland–Altman plots showed only relatively small systematic errors between the proposed method and readers for both MTV and maximum standardized uptake value (SUVmax). This approach may possess the potential to integrate PET-based biomarkers in clinical trials. Full article
Show Figures

Figure 1

22 pages, 2976 KiB  
Article
Cancerous Tumor Controlled Treatment Using Search Heuristic (GA)-Based Sliding Mode and Synergetic Controller
by Fazal Subhan, Muhammad Adnan Aziz, Inam Ullah Khan, Muhammad Fayaz, Marcin Wozniak, Jana Shafi and Muhammad Fazal Ijaz
Cancers 2022, 14(17), 4191; https://doi.org/10.3390/cancers14174191 - 29 Aug 2022
Cited by 12 | Viewed by 2725
Abstract
Cancerous tumor cells divide uncontrollably, which results in either tumor or harm to the immune system of the body. Due to the destructive effects of chemotherapy, optimal medications are needed. Therefore, possible treatment methods should be controlled to maintain the constant/continuous dose for [...] Read more.
Cancerous tumor cells divide uncontrollably, which results in either tumor or harm to the immune system of the body. Due to the destructive effects of chemotherapy, optimal medications are needed. Therefore, possible treatment methods should be controlled to maintain the constant/continuous dose for affecting the spreading of cancerous tumor cells. Rapid growth of cells is classified into primary and secondary types. In giving a proper response, the immune system plays an important role. This is considered a natural process while fighting against tumors. In recent days, achieving a better method to treat tumors is the prime focus of researchers. Mathematical modeling of tumors uses combined immune, vaccine, and chemotherapies to check performance stability. In this research paper, mathematical modeling is utilized with reference to cancerous tumor growth, the immune system, and normal cells, which are directly affected by the process of chemotherapy. This paper presents novel techniques, which include Bernstein polynomial (BSP) with genetic algorithm (GA), sliding mode controller (SMC), and synergetic control (SC), for giving a possible solution to the cancerous tumor cells (CCs) model. Through GA, random population is generated to evaluate fitness. SMC is used for the continuous exponential dose of chemotherapy to reduce CCs in about forty-five days. In addition, error function consists of five cases that include normal cells (NCs), immune cells (ICs), CCs, and chemotherapy. Furthermore, the drug control process is explained in all the cases. In simulation results, utilizing SC has completely eliminated CCs in nearly five days. The proposed approach reduces CCs as early as possible. Full article
Show Figures

Figure 1

18 pages, 28517 KiB  
Article
Chaotic Sparrow Search Algorithm with Deep Transfer Learning Enabled Breast Cancer Classification on Histopathological Images
by K. Shankar, Ashit Kumar Dutta, Sachin Kumar, Gyanendra Prasad Joshi and Ill Chul Doo
Cancers 2022, 14(11), 2770; https://doi.org/10.3390/cancers14112770 - 2 Jun 2022
Cited by 16 | Viewed by 2510
Abstract
Breast cancer is the major cause behind the death of women worldwide and is responsible for several deaths each year. Even though there are several means to identify breast cancer, histopathological diagnosis is now considered the gold standard in the diagnosis of cancer. [...] Read more.
Breast cancer is the major cause behind the death of women worldwide and is responsible for several deaths each year. Even though there are several means to identify breast cancer, histopathological diagnosis is now considered the gold standard in the diagnosis of cancer. However, the difficulty of histopathological image and the rapid rise in workload render this process time-consuming, and the outcomes might be subjected to pathologists’ subjectivity. Hence, the development of a precise and automatic histopathological image analysis method is essential for the field. Recently, the deep learning method for breast cancer pathological image classification has made significant progress, which has become mainstream in this field. This study introduces a novel chaotic sparrow search algorithm with a deep transfer learning-enabled breast cancer classification (CSSADTL-BCC) model on histopathological images. The presented CSSADTL-BCC model mainly focused on the recognition and classification of breast cancer. To accomplish this, the CSSADTL-BCC model primarily applies the Gaussian filtering (GF) approach to eradicate the occurrence of noise. In addition, a MixNet-based feature extraction model is employed to generate a useful set of feature vectors. Moreover, a stacked gated recurrent unit (SGRU) classification approach is exploited to allot class labels. Furthermore, CSSA is applied to optimally modify the hyperparameters involved in the SGRU model. None of the earlier works have utilized the hyperparameter-tuned SGRU model for breast cancer classification on HIs. The design of the CSSA for optimal hyperparameter tuning of the SGRU model demonstrates the novelty of the work. The performance validation of the CSSADTL-BCC model is tested by a benchmark dataset, and the results reported the superior execution of the CSSADTL-BCC model over recent state-of-the-art approaches. Full article
Show Figures

Figure 1

18 pages, 2343 KiB  
Article
Lightweight Deep Learning Model for Assessment of Substitution Voicing and Speech after Laryngeal Carcinoma Surgery
by Rytis Maskeliūnas, Audrius Kulikajevas, Robertas Damaševičius, Kipras Pribuišis, Nora Ulozaitė-Stanienė and Virgilijus Uloza
Cancers 2022, 14(10), 2366; https://doi.org/10.3390/cancers14102366 - 11 May 2022
Cited by 12 | Viewed by 2548
Abstract
Laryngeal carcinoma is the most common malignant tumor of the upper respiratory tract. Total laryngectomy provides complete and permanent detachment of the upper and lower airways that causes the loss of voice, leading to a patient’s inability to verbally communicate in the postoperative [...] Read more.
Laryngeal carcinoma is the most common malignant tumor of the upper respiratory tract. Total laryngectomy provides complete and permanent detachment of the upper and lower airways that causes the loss of voice, leading to a patient’s inability to verbally communicate in the postoperative period. This paper aims to exploit modern areas of deep learning research to objectively classify, extract and measure the substitution voicing after laryngeal oncosurgery from the audio signal. We propose using well-known convolutional neural networks (CNNs) applied for image classification for the analysis of voice audio signal. Our approach takes an input of Mel-frequency spectrogram (MFCC) as an input of deep neural network architecture. A database of digital speech recordings of 367 male subjects (279 normal speech samples and 88 pathological speech samples) was used. Our approach has shown the best true-positive rate of any of the compared state-of-the-art approaches, achieving an overall accuracy of 89.47%. Full article
Show Figures

Figure 1

Review

Jump to: Editorial, Research

23 pages, 648 KiB  
Review
Artificial Intelligence in Urooncology: What We Have and What We Expect
by Anita Froń, Alina Semianiuk, Uladzimir Lazuk, Kuba Ptaszkowski, Agnieszka Siennicka, Artur Lemiński, Wojciech Krajewski, Tomasz Szydełko and Bartosz Małkiewicz
Cancers 2023, 15(17), 4282; https://doi.org/10.3390/cancers15174282 - 26 Aug 2023
Cited by 5 | Viewed by 2196
Abstract
Introduction: Artificial intelligence is transforming healthcare by driving innovation, automation, and optimization across various fields of medicine. The aim of this study was to determine whether artificial intelligence (AI) techniques can be used in the diagnosis, treatment planning, and monitoring of urological cancers. [...] Read more.
Introduction: Artificial intelligence is transforming healthcare by driving innovation, automation, and optimization across various fields of medicine. The aim of this study was to determine whether artificial intelligence (AI) techniques can be used in the diagnosis, treatment planning, and monitoring of urological cancers. Methodology: We conducted a thorough search for original and review articles published until 31 May 2022 in the PUBMED/Scopus database. Our search included several terms related to AI and urooncology. Articles were selected with the consensus of all authors. Results: Several types of AI can be used in the medical field. The most common forms of AI are machine learning (ML), deep learning (DL), neural networks (NNs), natural language processing (NLP) systems, and computer vision. AI can improve various domains related to the management of urologic cancers, such as imaging, grading, and nodal staging. AI can also help identify appropriate diagnoses, treatment options, and even biomarkers. In the majority of these instances, AI is as accurate as or sometimes even superior to medical doctors. Conclusions: AI techniques have the potential to revolutionize the diagnosis, treatment, and monitoring of urologic cancers. The use of AI in urooncology care is expected to increase in the future, leading to improved patient outcomes and better overall management of these tumors. Full article
Show Figures

Figure 1

36 pages, 5892 KiB  
Review
AI-Powered Diagnosis of Skin Cancer: A Contemporary Review, Open Challenges and Future Research Directions
by Navneet Melarkode, Kathiravan Srinivasan, Saeed Mian Qaisar and Pawel Plawiak
Cancers 2023, 15(4), 1183; https://doi.org/10.3390/cancers15041183 - 13 Feb 2023
Cited by 39 | Viewed by 8215
Abstract
Skin cancer continues to remain one of the major healthcare issues across the globe. If diagnosed early, skin cancer can be treated successfully. While early diagnosis is paramount for an effective cure for cancer, the current process requires the involvement of skin cancer [...] Read more.
Skin cancer continues to remain one of the major healthcare issues across the globe. If diagnosed early, skin cancer can be treated successfully. While early diagnosis is paramount for an effective cure for cancer, the current process requires the involvement of skin cancer specialists, which makes it an expensive procedure and not easily available and affordable in developing countries. This dearth of skin cancer specialists has given rise to the need to develop automated diagnosis systems. In this context, Artificial Intelligence (AI)-based methods have been proposed. These systems can assist in the early detection of skin cancer and can consequently lower its morbidity, and, in turn, alleviate the mortality rate associated with it. Machine learning and deep learning are branches of AI that deal with statistical modeling and inference, which progressively learn from data fed into them to predict desired objectives and characteristics. This survey focuses on Machine Learning and Deep Learning techniques deployed in the field of skin cancer diagnosis, while maintaining a balance between both techniques. A comparison is made to widely used datasets and prevalent review papers, discussing automated skin cancer diagnosis. The study also discusses the insights and lessons yielded by the prior works. The survey culminates with future direction and scope, which will subsequently help in addressing the challenges faced within automated skin cancer diagnosis. Full article
Show Figures

Figure 1

36 pages, 5660 KiB  
Review
The Role of Deep Learning in Advancing Breast Cancer Detection Using Different Imaging Modalities: A Systematic Review
by Mohammad Madani, Mohammad Mahdi Behzadi and Sheida Nabavi
Cancers 2022, 14(21), 5334; https://doi.org/10.3390/cancers14215334 - 29 Oct 2022
Cited by 37 | Viewed by 9135
Abstract
Breast cancer is among the most common and fatal diseases for women, and no permanent treatment has been discovered. Thus, early detection is a crucial step to control and cure breast cancer that can save the lives of millions of women. For example, [...] Read more.
Breast cancer is among the most common and fatal diseases for women, and no permanent treatment has been discovered. Thus, early detection is a crucial step to control and cure breast cancer that can save the lives of millions of women. For example, in 2020, more than 65% of breast cancer patients were diagnosed in an early stage of cancer, from which all survived. Although early detection is the most effective approach for cancer treatment, breast cancer screening conducted by radiologists is very expensive and time-consuming. More importantly, conventional methods of analyzing breast cancer images suffer from high false-detection rates. Different breast cancer imaging modalities are used to extract and analyze the key features affecting the diagnosis and treatment of breast cancer. These imaging modalities can be divided into subgroups such as mammograms, ultrasound, magnetic resonance imaging, histopathological images, or any combination of them. Radiologists or pathologists analyze images produced by these methods manually, which leads to an increase in the risk of wrong decisions for cancer detection. Thus, the utilization of new automatic methods to analyze all kinds of breast screening images to assist radiologists to interpret images is required. Recently, artificial intelligence (AI) has been widely utilized to automatically improve the early detection and treatment of different types of cancer, specifically breast cancer, thereby enhancing the survival chance of patients. Advances in AI algorithms, such as deep learning, and the availability of datasets obtained from various imaging modalities have opened an opportunity to surpass the limitations of current breast cancer analysis methods. In this article, we first review breast cancer imaging modalities, and their strengths and limitations. Then, we explore and summarize the most recent studies that employed AI in breast cancer detection using various breast imaging modalities. 
In addition, we report available datasets on the breast-cancer imaging modalities which are important in developing AI-based algorithms and training deep learning models. In conclusion, this review paper tries to provide a comprehensive resource to help researchers working in breast cancer imaging analysis. Full article
Show Figures

Figure 1

Back to TopTop