Image Analysis and Machine Learning in Cancers

A special issue of Cancers (ISSN 2072-6694). This special issue belongs to the section "Methods and Technologies Development".

Deadline for manuscript submissions: 15 September 2024 | Viewed by 9249

Special Issue Editors


Dr. Helder C. R. De Oliveira
Guest Editor
Southern Alberta Institute of Technology (SAIT), Calgary, AB, Canada
Interests: medical imaging (mammography and digital breast tomosynthesis); machine learning and computer vision

Dr. Arianna Mencattini
Guest Editor
Dipartimento di Ingegneria Elettronica, University of Rome Tor Vergata, Rome, Italy
Interests: image analysis; machine learning; medical applications

Special Issue Information

Dear Colleagues,

Over the years, we have seen tremendous advances in image analysis and machine learning (ML) techniques for cancer detection. This progress has been driven mainly by better acquisition equipment capturing higher-quality data, the availability of public datasets, and advances in computing that make previously infeasible methods practical. For example, many recent papers deal with super-resolution imaging, the fusion of images from different modalities towards a better diagnosis, and even the generation of additional data when only a limited amount is available.

Image processing techniques are the basis of all Artificial Intelligence (AI)-based systems. Pre-processing methods have been shown to have a substantial impact on the subsequent steps of an ML/AI pipeline. It is therefore common to see papers proposing new methods to enhance images even further towards a better result.

Machine learning approaches, also known as conventional approaches, have been essential in pushing the boundaries of cancer detection and diagnosis. Because they can work with limited datasets and more modest hardware than deep learning (DL) approaches, many such methods have been proposed since the popularization of AI. Nowadays, these methods compete with DL-based ones in terms of accuracy, specificity, and sensitivity.

Deep learning techniques, aided by advances in and the accessibility of more powerful hardware, have played an important role in this scenario. Since their basis lies in image analysis and machine learning, recent advances have shown many possibilities for providing an accurate diagnosis while reducing patient follow-ups. However, DL models are known for their lack of explainability, although some studies have proposed ways to overcome this.

This Special Issue is dedicated to covering the most recent advances in image analysis and machine learning techniques toward a better detection/diagnosis of cancers in general.

Dr. Helder C. R. De Oliveira
Dr. Arianna Mencattini
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to the website; once registered, you can access the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Cancers is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2900 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

 

Keywords

  • cancer detection and diagnosis system
  • machine learning
  • deep learning
  • medical imaging analysis
  • few-shot deep learning
  • attention segmentation
  • feature extraction
  • probabilistic models
  • explainability
  • image fusion
  • generative adversarial network

Published Papers (5 papers)


Research

35 pages, 5607 KiB  
Article
Radiomics Machine Learning Analysis of Clear Cell Renal Cell Carcinoma for Tumour Grade Prediction Based on Intra-Tumoural Sub-Region Heterogeneity
by Abeer J. Alhussaini, J. Douglas Steele, Adel Jawli and Ghulam Nabi
Cancers 2024, 16(8), 1454; https://doi.org/10.3390/cancers16081454 - 10 Apr 2024
Viewed by 1573
Abstract
Background: Renal cancers are among the top ten causes of cancer-specific mortality, of which the ccRCC subtype is responsible for most cases. The grading of ccRCC is important in determining tumour aggressiveness and clinical management. Objectives: The objectives of this research were to predict the WHO/ISUP grade of ccRCC pre-operatively and characterise the heterogeneity of tumour sub-regions using radiomics and ML models, including comparison with pre-operative biopsy-determined grading in a sub-group. Methods: Data were obtained from multiple institutions across two countries, including 391 patients with pathologically proven ccRCC. For analysis, the data were separated into four cohorts. Cohorts 1 and 2 included data from the respective institutions from the two countries, cohort 3 was the combined data from both cohorts 1 and 2, and cohort 4 was a subset of cohort 1, for which both the biopsy and subsequent histology from resection (partial or total nephrectomy) were available. 3D image segmentation was carried out to derive a voxel of interest (VOI) mask. Radiomics features were then extracted from the contrast-enhanced images, and the data were normalised. The Pearson correlation coefficient and the XGBoost model were used to reduce the dimensionality of the features. Thereafter, 11 ML algorithms were implemented for the purpose of predicting the ccRCC grade and characterising the heterogeneity of sub-regions in the tumours. Results: For cohort 1, the 50% tumour core and 25% tumour periphery exhibited the best performance, with average AUCs of 77.9% and 78.6%, respectively. The 50% tumour core presented the highest performance in cohorts 2 and 3, with average AUC values of 87.6% and 76.9%, respectively. With the 25% periphery, cohort 4 showed AUC values of 95.0% and 80.0% for grade prediction when using internal and external validation, respectively, while biopsy histology had an AUC of 31.0% for the classification with the final grade of resection histology as a reference standard. The CatBoost classifier was the best for each of the four cohorts, with average AUCs of 80.0%, 86.5%, 77.0% and 90.3% for cohorts 1, 2, 3 and 4, respectively. Conclusions: Radiomics signatures combined with ML have the potential to predict the WHO/ISUP grade of ccRCC with superior performance when compared to pre-operative biopsy. Moreover, tumour sub-regions contain useful information that should be analysed independently when determining the tumour grade. Therefore, it is possible to distinguish the grade of ccRCC pre-operatively to improve patient care and management. Full article
(This article belongs to the Special Issue Image Analysis and Machine Learning in Cancers)
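The two-stage feature reduction described in the abstract (a Pearson correlation filter followed by model-based importance ranking) can be sketched as follows. This is a minimal illustration on synthetic data, not the authors' pipeline: the random matrix stands in for normalised radiomics features, and absolute correlation with the label stands in for XGBoost feature importance.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy matrix standing in for normalised radiomics features
# (rows = tumours, columns = features); y is a binary grade label.
X = rng.normal(size=(120, 30))
X[:, 5] = X[:, 4] + 0.01 * rng.normal(size=120)   # deliberately redundant pair
y = (X[:, 0] + X[:, 2] + rng.normal(scale=0.5, size=120) > 0).astype(int)

# Step 1: Pearson filter - drop one feature from every pair with |r| > 0.9.
corr = np.corrcoef(X, rowvar=False)
keep = []
for j in range(X.shape[1]):
    if all(abs(corr[j, k]) <= 0.9 for k in keep):
        keep.append(j)

# Step 2: rank the survivors by a simple relevance score (absolute
# correlation with the label, standing in for XGBoost importance), keep 10.
score = np.abs([np.corrcoef(X[:, j], y)[0, 1] for j in keep])
selected = [keep[i] for i in np.argsort(score)[::-1][:10]]

X_reduced = X[:, selected]
print(X_reduced.shape)   # (120, 10); the redundant copy of feature 4 is gone
```

The correlation threshold and the number of retained features are illustrative choices, not values reported by the paper.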

22 pages, 1094 KiB  
Article
Improving Skin Lesion Segmentation with Self-Training
by Aleksandra Dzieniszewska, Piotr Garbat and Ryszard Piramidowicz
Cancers 2024, 16(6), 1120; https://doi.org/10.3390/cancers16061120 - 11 Mar 2024
Viewed by 801
Abstract
Skin lesion segmentation plays a key role in the diagnosis of skin cancer; it can be a component in both traditional algorithms and end-to-end approaches. The quality of segmentation directly impacts the accuracy of classification; however, attaining optimal segmentation necessitates a substantial amount of labeled data. Semi-supervised learning allows for employing unlabeled data to enhance the results of the machine learning model. In the case of medical image segmentation, acquiring detailed annotation is time-consuming and costly and requires skilled individuals, so the utilization of unlabeled data allows for a significant reduction in manual segmentation effort. This study proposes a novel approach to semi-supervised skin lesion segmentation using self-training with a Noisy Student. This approach allows for utilizing large amounts of available unlabeled images. It consists of four steps: first, training the teacher model on labeled data only; then, generating pseudo-labels with the teacher model; next, training the student model on both labeled and pseudo-labeled data; and lastly, training the student* model on pseudo-labels generated with the student model. In this work, we implemented the DeepLabV3 architecture as both teacher and student models. As a final result, we achieved a mIoU of 88.0% on the ISIC 2018 dataset and a mIoU of 87.54% on the PH2 dataset. The evaluation of the proposed approach shows that Noisy Student training improves the segmentation performance of neural networks in a skin lesion segmentation task while using only small amounts of labeled data. Full article
(This article belongs to the Special Issue Image Analysis and Machine Learning in Cancers)
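The four-step Noisy Student procedure described in the abstract can be sketched with a deliberately tiny stand-in model. This is an illustration only: a nearest-centroid classifier on synthetic 2-D points replaces DeepLabV3, and Gaussian input jitter plays the role of the noise injected during student training.

```python
import numpy as np

rng = np.random.default_rng(1)

def fit_centroids(X, y):
    # Minimal nearest-centroid "model" standing in for DeepLabV3.
    return np.stack([X[y == c].mean(axis=0) for c in (0, 1)])

def predict(centroids, X):
    d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
    return d.argmin(axis=1)

# Hypothetical toy data: a small labelled set and a larger unlabelled pool.
X_lab = np.concatenate([rng.normal(-2, 1, (10, 2)), rng.normal(2, 1, (10, 2))])
y_lab = np.array([0] * 10 + [1] * 10)
X_unlab = np.concatenate([rng.normal(-2, 1, (200, 2)), rng.normal(2, 1, (200, 2))])

# Step 1: train the teacher on labelled data only.
teacher = fit_centroids(X_lab, y_lab)

# Step 2: pseudo-label the unlabelled pool with the teacher.
pseudo = predict(teacher, X_unlab)

# Step 3: train the student on labelled + pseudo-labelled data,
# with input noise (the "noisy" part of Noisy Student).
X_all = np.concatenate([X_lab, X_unlab + rng.normal(0, 0.3, X_unlab.shape)])
y_all = np.concatenate([y_lab, pseudo])
student = fit_centroids(X_all, y_all)

# Step 4: repeat once with the student as the new teacher (the student* model).
pseudo2 = predict(student, X_unlab)
student_star = fit_centroids(
    np.concatenate([X_lab, X_unlab + rng.normal(0, 0.3, X_unlab.shape)]),
    np.concatenate([y_lab, pseudo2]),
)
print(predict(student_star, np.array([[2.0, 2.0]]))[0])   # 1
```

In the paper both teacher and student are segmentation networks and the labels are per-pixel masks; the loop structure, however, is the same.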

16 pages, 4516 KiB  
Article
Dung Beetle Optimization with Deep Feature Fusion Model for Lung Cancer Detection and Classification
by Mohammad Alamgeer, Nuha Alruwais, Haya Mesfer Alshahrani, Abdullah Mohamed and Mohammed Assiri
Cancers 2023, 15(15), 3982; https://doi.org/10.3390/cancers15153982 - 5 Aug 2023
Cited by 3 | Viewed by 1441
Abstract
Lung cancer is the leading cause of cancer deaths worldwide. An important reason for these deaths is late diagnosis and poor prognosis. With the accelerated improvement of deep learning (DL) approaches, DL can be effectively and widely applied to several real-world applications in healthcare systems, like medical image interpretation and disease analysis. Medical imaging devices can be vital in early-stage lung tumor analysis and in monitoring lung tumors during treatment. Many medical imaging modalities, like computed tomography (CT), chest X-ray (CXR), molecular imaging, magnetic resonance imaging (MRI), and positron emission tomography (PET) systems, are widely used for lung cancer detection. This article presents a new dung beetle optimization modified deep feature fusion model for lung cancer detection and classification (DBOMDFF-LCC) technique. The presented DBOMDFF-LCC technique mainly depends upon the feature fusion and hyperparameter tuning process. To accomplish this, the DBOMDFF-LCC technique uses a feature fusion process comprising three DL models, namely residual network (ResNet), densely connected network (DenseNet), and Inception-ResNet-v2. Furthermore, the DBO approach is employed for the optimum hyperparameter selection of the three DL approaches. For lung cancer detection purposes, the DBOMDFF-LCC system utilizes a long short-term memory (LSTM) approach. The simulation results of the DBOMDFF-LCC technique on a medical dataset are investigated using different evaluation metrics. The extensive comparative results highlighted the superior performance of the DBOMDFF-LCC technique in lung cancer classification. Full article
(This article belongs to the Special Issue Image Analysis and Machine Learning in Cancers)
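The feature fusion step described in the abstract amounts to concatenating the embeddings produced by the three backbones before classification. A minimal sketch, with random arrays standing in for the ResNet, DenseNet, and Inception-ResNet-v2 features (the embedding dimensions are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical per-image embeddings from three backbones (stand-ins for
# ResNet, DenseNet and Inception-ResNet-v2 feature extractors).
n = 8
feat_resnet = rng.normal(size=(n, 512))
feat_densenet = rng.normal(size=(n, 256))
feat_inception = rng.normal(size=(n, 384))

# Feature fusion by concatenation along the feature axis; the fused
# vectors would then feed the downstream classifier (an LSTM in the paper).
fused = np.concatenate([feat_resnet, feat_densenet, feat_inception], axis=1)
print(fused.shape)   # (8, 1152)
```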

13 pages, 2075 KiB  
Article
Machine Learning Integrating 99mTc Sestamibi SPECT/CT and Radiomics Data Achieves Optimal Characterization of Renal Oncocytic Tumors
by Michail E. Klontzas, Emmanouil Koltsakis, Georgios Kalarakis, Kiril Trpkov, Thomas Papathomas, Apostolos H. Karantanas and Antonios Tzortzakakis
Cancers 2023, 15(14), 3553; https://doi.org/10.3390/cancers15143553 - 9 Jul 2023
Cited by 3 | Viewed by 2694
Abstract
The increasing evidence of oncocytic renal tumors positive in 99mTc Sestamibi Single-Photon Emission Computed Tomography/Computed Tomography (SPECT/CT) examination calls for the development of diagnostic tools to differentiate these tumors from more aggressive forms. This study combined radiomics analysis with the uptake of 99mTc Sestamibi on SPECT/CT to differentiate benign renal oncocytic neoplasms from renal cell carcinoma. A total of 57 renal tumors were prospectively collected. Histopathological analysis and radiomics data extraction were performed. XGBoost classifiers were trained using the radiomics features alone and combined with the results from the visual evaluation of the 99mTc Sestamibi SPECT/CT examination. The combined SPECT/radiomics model achieved higher accuracy (95%) with an area under the curve (AUC) of 98.3% (95% CI 93.7–100%) than the radiomics-only model (71.67%) with an AUC of 75% (95% CI 49.7–100%) and visual evaluation of 99mTc Sestamibi SPECT/CT alone (90.8%) with an AUC of 90.8% (95% CI 82.5–99.1%). The positive predictive values of the SPECT/radiomics, radiomics-only, and 99mTc Sestamibi SPECT/CT-only models were 100%, 85.71%, and 85%, respectively, whereas the negative predictive values were 85.71%, 55.56%, and 94.6%, respectively. Feature importance analysis revealed that 99mTc Sestamibi uptake was the most influential attribute in the combined model. This study highlights the potential of combining radiomics analysis with 99mTc Sestamibi SPECT/CT to improve the preoperative characterization of benign renal oncocytic neoplasms. The proposed SPECT/radiomics classifier outperformed the visual evaluation of 99mTc Sestamibi SPECT/CT and the radiomics-only model, demonstrating that the integration of 99mTc Sestamibi SPECT/CT and radiomics data provides improved diagnostic performance, with minimal false positive and false negative results. Full article
(This article belongs to the Special Issue Image Analysis and Machine Learning in Cancers)
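The combined SPECT/radiomics model described in the abstract can be pictured as a design matrix in which the visual 99mTc Sestamibi read enters as one extra binary column beside the radiomics features; an XGBoost classifier is then trained on the combined matrix. A minimal sketch of the feature-assembly step with synthetic values (the radiomics feature count is hypothetical):

```python
import numpy as np

rng = np.random.default_rng(3)

n = 57   # matches the cohort size reported in the abstract
radiomics = rng.normal(size=(n, 20))          # hypothetical radiomics features
spect_positive = rng.integers(0, 2, size=n)   # visual 99mTc Sestamibi call (0/1)

# Combined design matrix: the SPECT read enters the model as one extra
# binary feature next to the radiomics columns.
X_combined = np.column_stack([radiomics, spect_positive])
print(X_combined.shape)   # (57, 21)
```

The feature-importance result in the paper (the SPECT column dominating) is what this layout makes possible: the classifier can weigh the visual read against every radiomics feature on equal footing.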

19 pages, 15340 KiB  
Article
Al-Biruni Earth Radius Optimization with Transfer Learning Based Histopathological Image Analysis for Lung and Colon Cancer Detection
by Rayed AlGhamdi, Turky Omar Asar, Fatmah Y. Assiri, Rasha A. Mansouri and Mahmoud Ragab
Cancers 2023, 15(13), 3300; https://doi.org/10.3390/cancers15133300 - 23 Jun 2023
Cited by 4 | Viewed by 1542
Abstract
An early diagnosis of lung and colon cancer (LCC) is critical for improved patient outcomes and effective treatment. Histopathological image (HSI) analysis has emerged as a robust tool for cancer diagnosis. HSI analysis for an LCC diagnosis includes the analysis and examination of tissue samples obtained from the LCC to recognize lesions or cancerous cells. It has a significant role in the staging and diagnosis of this tumor, which aids in prognosis and treatment planning, but a manual analysis of the images is subject to human error and is also time-consuming. Therefore, a computer-aided approach is needed for the detection of LCC using HSI. Transfer learning (TL) leverages pretrained deep learning (DL) algorithms that have been trained on a larger dataset to extract related features from the HSI, which are then used for training a classifier for a tumor diagnosis. This manuscript offers the design of the Al-Biruni Earth Radius Optimization with Transfer Learning-based Histopathological Image Analysis for Lung and Colon Cancer Detection (BERTL-HIALCCD) technique. The purpose of the study is to detect LCC effectually in histopathological images. To execute this, the BERTL-HIALCCD method follows the concepts of computer vision (CV) and transfer learning for accurate LCC detection. When using the BERTL-HIALCCD technique, an improved ShuffleNet model is applied for the feature extraction process, and its hyperparameters are chosen by the BER system. For the effectual recognition of LCC, a deep convolutional recurrent neural network (DCRNN) model is applied. Finally, the coati optimization algorithm (COA) is exploited for the parameter choice of the DCRNN approach. For examining the efficacy of the BERTL-HIALCCD technique, a comprehensive group of experiments was conducted on a large dataset of histopathological images. The experimental outcomes demonstrate that the combination of the BER and COA algorithms attains an improved performance in cancer detection over the compared models. Full article
(This article belongs to the Special Issue Image Analysis and Machine Learning in Cancers)
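Metaheuristics such as BER and COA search a hyperparameter space for the configuration that maximises a validation score. A plain random search over a toy objective illustrates the same evaluate-and-keep-the-best loop (the objective, its peak, and the parameter ranges are all hypothetical, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(4)

def validation_score(lr, dropout):
    # Hypothetical black-box objective standing in for "train the network,
    # return validation accuracy"; peaked at lr = 1e-3, dropout = 0.3.
    return -((np.log10(lr) + 3) ** 2) - ((dropout - 0.3) ** 2)

# Plain random search over the hyperparameter space: a simple baseline
# against which population-based optimisers like BER or COA are compared.
best, best_score = None, -np.inf
for _ in range(200):
    lr = 10 ** rng.uniform(-5, -1)
    dropout = rng.uniform(0.0, 0.6)
    s = validation_score(lr, dropout)
    if s > best_score:
        best, best_score = (lr, dropout), s

print(best)   # a (learning rate, dropout) pair near the objective's peak
```

Population-based optimisers replace the independent draws with a population that shares information between iterations, which typically reaches the peak in fewer objective evaluations.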
