Review

Artificial Intelligence in Renal Cell Carcinoma Histopathology: Current Applications and Future Perspectives

by Alfredo Distante 1,2,*, Laura Marandino 3, Riccardo Bertolo 4, Alexandre Ingels 5, Nicola Pavan 6, Angela Pecoraro 7, Michele Marchioni 8, Umberto Carbonara 9, Selcuk Erdem 10, Daniele Amparore 7, Riccardo Campi 11, Eduard Roussel 12, Anna Caliò 13, Zhenjie Wu 14, Carlotta Palumbo 15, Leonardo D. Borregales 16, Peter Mulders 2 and Constantijn H. J. Muselaers 2, on behalf of the EAU Young Academic Urologists (YAU) Renal Cancer Working Group

1 Department of Urology, Catholic University of the Sacred Heart, 00168 Roma, Italy
2 Department of Urology, Radboud University Medical Center, Geert Grooteplein 10, 6525 GA Nijmegen, The Netherlands
3 Department of Medical Oncology, IRCCS Ospedale San Raffaele, 20132 Milan, Italy
4 Department of Urology, San Carlo di Nancy Hospital, 00165 Rome, Italy
5 Department of Urology, University Hospital Henri Mondor, APHP (Assistance Publique—Hôpitaux de Paris), 94000 Créteil, France
6 Department of Surgical, Oncological and Oral Sciences, Section of Urology, University of Palermo, 90133 Palermo, Italy
7 Department of Urology, San Luigi Gonzaga Hospital, University of Turin, Orbassano, 10043 Turin, Italy
8 Department of Medical, Oral and Biotechnological Sciences, G. d’Annunzio University of Chieti, 66100 Chieti, Italy
9 Andrology and Kidney Transplantation Unit, Department of Emergency and Organ Transplantation-Urology, University of Bari, 70121 Bari, Italy
10 Division of Urologic Oncology, Department of Urology, Istanbul University Istanbul Faculty of Medicine, Istanbul 34093, Turkey
11 Urological Robotic Surgery and Renal Transplantation Unit, Careggi Hospital, University of Florence, 50121 Firenze, Italy
12 Department of Urology, University Hospitals Leuven, 3000 Leuven, Belgium
13 Section of Pathology, Department of Diagnostic and Public Health, University of Verona, 37134 Verona, Italy
14 Department of Urology, Changhai Hospital, Naval Medical University, Shanghai 200433, China
15 Division of Urology, Maggiore della Carità Hospital of Novara, Department of Translational Medicine, University of Eastern Piedmont, 13100 Novara, Italy
16 Department of Urology, Weill Cornell Medicine, New York-Presbyterian Hospital, New York, NY 10032, USA
* Author to whom correspondence should be addressed.
Diagnostics 2023, 13(13), 2294; https://doi.org/10.3390/diagnostics13132294
Submission received: 31 May 2023 / Revised: 1 July 2023 / Accepted: 4 July 2023 / Published: 6 July 2023

Abstract
Renal cell carcinoma (RCC) is characterized by its diverse histopathological features, which pose challenges to accurate diagnosis and prognosis. A comprehensive literature review was conducted to explore recent advancements in the field of artificial intelligence (AI) in RCC pathology. The aim of this paper is to assess whether these advancements hold promise in improving the precision, efficiency, and objectivity of histopathological analysis for RCC, while also reducing costs and interobserver variability and potentially alleviating the labor and time burden experienced by pathologists. The reviewed AI-powered approaches demonstrate effective identification and classification of several histopathological features associated with RCC, facilitating accurate diagnosis, grading, and prognosis prediction and enabling precise and reliable assessments. Nevertheless, implementing AI in RCC pathology raises challenges concerning standardization, generalizability, performance benchmarking, and integration into clinical workflows. Developing methodologies that enable pathologists to interpret AI decisions accurately is imperative. Moreover, establishing more robust and standardized validation workflows is crucial to instill confidence in the outcomes of AI-powered systems. These efforts are vital for advancing current state-of-the-art practices and enhancing patient care in the future.

1. Introduction

Renal cell carcinoma (RCC) is among the top 10 most common cancers in both men and women. The incidence of RCC has gradually risen in recent years, resulting in increased time-, effort-, and cost-related demands on healthcare systems [1]. Adequate RCC diagnosis and treatment planning rely on sufficient clinical data, imaging, histology, and molecular profiling [2,3].
Histological analysis, which is supported by genetic and cytogenetic analysis, is crucial for RCC diagnosis, as well as subtyping and defining features with high prognostic and therapeutic impact [4,5]. These features include tumor grade, RCC subtype, lymphovascular invasion, tumor necrosis, sarcomatoid dedifferentiation, etc. [6,7,8]. RCC histological diagnosis and classification, in particular, can be a daunting task, as it encompasses a broad spectrum of histopathological entities, which have recently been subject to changes [9,10].
Over the years, the daily clinical practice of treating patients with RCC has changed from using paper charts, analog radiographs, and light microscopes to using more modern counterparts, such as electronic health records and digitalized radiology and virtual pathology. This shift has generated an enormous amount of digital data, which can be utilized in data-characterization algorithms or artificial intelligence (AI) [11,12].
The use of AI in radiology, which is also known as radiomics, has shown excellent diagnostic accuracy for detecting RCC and can even provide information regarding RCC subtyping, nuclear grade prediction, gene mutations, and gene expression-based molecular signatures [13]. In line with AI in radiology, efforts to use AI in RCC histopathology have been undertaken in recent years. This relatively new field, called pathomics or computational pathology, promises to improve efficiency, accessibility, and cost-effectiveness, reduce time consumption, and enhance accuracy and reproducibility with lower subjectivity [11,14,15,16,17]. In addition, Whole Slide Imaging (WSI) technology enables machine learning in pathology by providing an enormous amount of high-quality information for training and testing AI models to identify specific features and patterns that can be complex for even the human eye to discern [12,18,19]. Ultimately, AI aims to assist pathologists in making more accurate and consistent diagnoses in shorter periods of time and is a valuable instrument for uncovering the abovementioned information [20,21].
In this literature review, we aim to provide an overview of the current evidence regarding the use of computational pathology in RCC histopathology. Our review aims to evaluate the potential prospects for implementing this emerging technology in everyday practice by comparing and analyzing its advantages and possible drawbacks, as well as bottlenecks that may hinder its development. Furthermore, we explore how this intriguing new technology can aid pathologists in making their work less time-consuming, more standardized, and more cost-effective.

2. Evidence Acquisition

We conducted a narrative review of the literature concerning all possible applications of AI in the histopathological analysis of RCC specimens.
The Medline database was screened, and the search was restricted to articles published in English between 1 January 2017 and 1 January 2023, since most of the relevant literature in this field was published within this timeframe.
We used a structured search strategy (Supplementary Material), obtaining 98 results that were reviewed.
Original studies and case series were selected for inclusion, while reviews, editorials, and letters to the editor were excluded. Finally, references to the retrieved articles were hand-searched to identify additional reports that met the scope of this review.
The titles and abstracts of all papers included were independently assessed against the inclusion and exclusion criteria using Rayyan (Rayyan Systems, Cambridge, MA, USA).

3. Basics of Artificial Intelligence and Its Application in Histopathology

Machine learning (ML) is a subfield of AI that uses algorithms that enable computers to learn from digital images of tissue samples. In histopathology, it can be used for many tasks, such as digital analysis of images of tissue samples, identification of different structures or cell types, and classification or segmentation of different regions in the tissue sample [22]. The capabilities of ML increased with the development of deep learning (DL), a branch of ML focused on creating virtual neural networks with multiple layers, inspired by the ways in which biological neurons communicate [23]. DL models are well suited for feature extraction and learning from data because they can automatically identify complex patterns and relationships within large and diverse datasets, such as those used in cancer diagnostics (definition in Box 1).
Choosing the best algorithm for AI applications in histopathology is still challenging. There are three primary types of learning: supervised learning, which uses labeled data for training; unsupervised learning, which finds patterns without labels; and weakly supervised learning, which strikes a middle ground by using partially labeled data.
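To make this distinction concrete, the sketch below contrasts the supervised and unsupervised paradigms on hypothetical two-dimensional feature vectors (standing in for, e.g., texture statistics of tissue patches). It assumes only numpy and is purely illustrative, not a histopathology pipeline.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical 2-D feature vectors: two well-separated groups.
class_a = rng.normal(loc=0.0, scale=0.5, size=(50, 2))
class_b = rng.normal(loc=4.0, scale=0.5, size=(50, 2))
X = np.vstack([class_a, class_b])
y = np.array([0] * 50 + [1] * 50)  # labels: used ONLY by the supervised path

# Supervised learning: a nearest-centroid classifier fit on labeled data.
centroids = np.array([X[y == c].mean(axis=0) for c in (0, 1)])

def predict(points):
    d = np.linalg.norm(points[:, None, :] - centroids[None, :, :], axis=2)
    return d.argmin(axis=1)

# Unsupervised learning: 2-means clustering, which never sees the labels.
centers = X[rng.choice(len(X), size=2, replace=False)]
for _ in range(20):
    assign = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2).argmin(axis=1)
    # keep the old center if a cluster happens to be empty
    centers = np.array([X[assign == k].mean(axis=0) if (assign == k).any() else centers[k]
                        for k in (0, 1)])

supervised_acc = (predict(X) == y).mean()
# Clustering recovers the groups only up to an arbitrary label permutation.
cluster_agreement = max((assign == y).mean(), (assign != y).mean())
```

Weakly supervised methods sit between these two extremes, for example, training patch-level classifiers from slide-level labels only.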
In histopathological practice, there are numerous time-consuming and repetitive tasks, such as the analysis of high volumes of biopsy tissue samples from the breast, prostate, colon, and cervix generated by screening programs, as well as the examination of large numbers of lymph nodes resected during routine surgical procedures. AI has the potential to flag suspicious regions for inspection and may eventually enable autonomous case assessment.
In addition, AI can help pathologists to complete classification tasks, like highlighting regions of prostate cancer using different colors to represent different Gleason grades [24,25].
Moreover, combining segmentation, detection, and classification techniques makes it possible to objectively quantify established biomarkers utilized in clinical practice. Specific instances are the evaluation of tumor-infiltrating lymphocytes [26] and the quantification of programmed death-ligand 1 (PD-L1)-positive cells [27], which can even be predicted directly from slides [28].
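As a toy illustration of such quantification, the snippet below computes a PD-L1-positive fraction from hypothetical per-cell detector output. The cell coordinates, staining intensities, and the 0.5 cutoff are invented for the example; a real assay would use detector output and validated scoring rules.

```python
import numpy as np

# Hypothetical per-cell results from an upstream detector:
# each row = one detected cell, columns = (x, y, pdl1_stain_intensity).
cells = np.array([
    [12.0, 40.0, 0.82],
    [30.0, 22.0, 0.15],
    [55.0, 60.0, 0.91],
    [70.0, 10.0, 0.05],
    [88.0, 75.0, 0.64],
])

THRESHOLD = 0.5  # assumed intensity cutoff for calling a cell PD-L1-positive
positive = cells[:, 2] >= THRESHOLD
tps = positive.sum() / len(cells) * 100  # positive-cell percentage

print(f"PD-L1-positive cells: {int(positive.sum())}/{len(cells)} ({tps:.0f}%)")
```

The same counting logic underlies most biomarker quantification once segmentation and classification have produced per-cell calls.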
Therefore, AI can be utilized for tasks such as tumor detection and classification, including subtyping, image segmentation, tumor grading, and predictive/prognostic modeling, within the field of histopathology.
Box 1. Definitions.
Machine learning:
Machine learning is a specific branch of artificial intelligence, based on algorithms that enable computer systems to learn from data and to make predictions and decisions, without the need for explicit programming instructions to do so.
Whole-slide images:
Digital representations of entire microscope slides created by scanning glass slides with high-resolution scanners.
Deep learning:
A subfield of machine learning where algorithms are trained for a task or set of tasks by subjecting a multi-layered artificial neural network to training data. It eliminates the need for manual feature engineering by allowing the network to learn directly from raw input data during the training process. The acquired algorithm is subsequently utilized for tasks such as classification, detection, or segmentation. The term "deep" refers to the use of artificial neural networks comprising numerous layers, thus referred to as deep neural networks.
Convolutional neural network:
In deep learning, a class of artificial neural network consisting of a sequence of convolutional layers that process input data and produce an output. Each layer implements the convolution operation between the input data and a set of filters. These filter values are learned automatically during training, allowing the network to extract relevant features from the data in an end-to-end fashion (learning the optimal values of all parameters of the model simultaneously rather than sequentially).
Digital pathology:
The process of digitizing the conventional diagnostic approach. It is accomplished through the utilization of whole-slide scanners and computer screens.
Pathomics:
The analysis of digital pathology data by computational algorithms to extract meaningful features. These features are then used to build models for diagnostic, prognostic, and therapeutic purposes.
Computational pathology:
Computational analysis of digital images acquired by scanning pathology slides.
Image segmentation:
The process of dividing a digital pathology image into distinct regions or objects of interest (for example, nuclei or tumor regions) to enable analysis and extraction of specific features.
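The convolution operation described in Box 1 can be sketched in a few lines of numpy. The "patch" and the hand-set edge-detecting filter below are toy values; in a trained CNN, the filter weights would be learned from data rather than set by hand.

```python
import numpy as np

def conv2d(image, kernel):
    """Single-channel 'valid' convolution as used in CNN layers
    (implemented as cross-correlation, the usual deep learning convention)."""
    h, w = kernel.shape
    H, W = image.shape
    out = np.empty((H - h + 1, W - w + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            # element-wise product of the filter with the local window, then sum
            out[i, j] = np.sum(image[i:i + h, j:j + w] * kernel)
    return out

# Toy 5x5 "patch" with a vertical edge between columns 1 and 2.
patch = np.array([[0, 0, 1, 1, 1]] * 5, dtype=float)
# Hand-set vertical-edge filter (a trained CNN would learn these values).
edge_filter = np.array([[-1.0, 0.0, 1.0]] * 3)

response = conv2d(patch, edge_filter)
```

The response map peaks where the filter's pattern (here, a left-dark/right-bright transition) is present, which is exactly how early CNN layers localize simple visual structures.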

4. Artificial Intelligence Aided Diagnosis of RCC Subtypes

Although several advances have been made in RCC diagnostics in the last decade, especially in imaging techniques, histopathological diagnosis based on a pathologist’s skill and experience remains the clinical standard for distinguishing RCC from normal renal tissue at the microscopic level [13,29,30,31].
However, RCCs can have complicated characteristics that make the diagnosis difficult, laborious, and time consuming, even for experienced pathologists. These issues are known to lead to only moderate inter-reader agreement on the RCC subtype [32,33,34]. Several studies have demonstrated that computational pathology could enable more uniform specimen readings and reduce intra- and inter-observer variability [35,36,37].

4.1. RCC Diagnosis and Subtyping in Biopsy Specimens

RCC varies in its biological behavior, ranging from indolent to aggressive tumors. Currently, no reliable predictive models that distinguish between different clinical types are available in the pre-operative setting, creating concerns about under- and over-treatment, especially in small renal masses (SRMs), which now represent up to 50% of renal lesions [38,39,40,41,42]. To date, there are no highly reliable biomarkers or imaging methods that can correctly differentiate between benign and malignant lesions [43,44,45], an issue that can lead to overdiagnosis and overtreatment. As a result, there has been a growing trend of using renal mass biopsy (RMB) to address this challenge over the past decade [46,47,48].
However, RMBs have some limitations as they are non-diagnostic in approximately 10–15% of the cases and remain intrinsically invasive [49]. The main reason for the high percentage of non-diagnostic results is inadequate sampling of tumors [50]. Another crucial issue in RMB is a fair degree of interobserver variability [51], a concern that is also found in breast, prostate, and melanoma biopsies [52,53,54].
To tackle these problems, Fenstermaker et al. developed a DL-based algorithm for RCC diagnosis, grading, and subtype assessment [55]. Their method reached a high accuracy level when using only a 100 square micrometers (µm2) patch, making it a potentially valuable tool in RMB analysis. In addition, although their method was trained on whole-mount surgical specimens, a computational method trained and tested on small tissue samples may reduce the need for repeat biopsies by decreasing insufficient tissue sampling and reducing interobserver variability.
However, this study focused on identifying the three main subtypes of RCC without considering benign tumors or oncocytomas. A significant proportion of SRMs are benign, with oncocytoma being the most frequent benign contrast-enhancing renal mass. A well-known problem faced by pathologists is differentiating oncocytomas from chromophobe RCC [56,57,58]. Zhu et al. reported favorable results in RCC subtyping in surgical resection and RMB specimens, as well as promising results in oncocytoma diagnosis in RMB [59]. The group trained and tested a model on an internal dataset of renal resections. In addition, they tested this model on 79 RCC biopsy slides, 24 of which were diagnosed as renal oncocytoma, and on an external dataset, achieving good performance, as shown in Table 1.
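Patch-based pipelines like those above typically classify many small tiles and then aggregate the per-patch calls into a slide-level result. A minimal sketch of that aggregation step follows, with a trivial intensity threshold standing in for a trained patch classifier (all names and values are hypothetical).

```python
import numpy as np
from collections import Counter

def tile(slide, size):
    """Split a 2-D slide array into non-overlapping size x size patches
    (edge regions that do not fill a whole patch are discarded)."""
    H, W = slide.shape
    return [slide[i:i + size, j:j + size]
            for i in range(0, H - size + 1, size)
            for j in range(0, W - size + 1, size)]

def classify_patch(patch):
    """Stand-in for a trained CNN: call a patch 'tumor' if its mean
    intensity exceeds a threshold (purely illustrative)."""
    return "tumor" if patch.mean() > 0.5 else "normal"

def slide_level_call(slide, size=4):
    """Aggregate per-patch predictions into one slide-level label by majority vote."""
    votes = Counter(classify_patch(p) for p in tile(slide, size))
    return votes.most_common(1)[0][0]

# Toy 8x8 "slide": three of the four 4x4 patches are bright (tumor-like).
slide = np.zeros((8, 8))
slide[:4, :] = 0.9
slide[4:, :4] = 0.9
```

Real systems often replace the majority vote with confidence-weighted pooling, but the tile-classify-aggregate structure is the same.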

4.2. RCC Diagnosis and Subtyping in Surgical Resection Specimens

Despite the recent increased use of RMB and enormous advances in diagnostic accuracy [60,61], approximately 73% of surveyed urologists would not perform a RMB for various reasons [62]. Currently, the standard treatment for non-metastatic RCC is surgical resection, carried out via either radical or partial nephrectomy; surgery is also used in selected cases of metastatic RCC [63,64]. However, examining and analyzing the complex histological patterns of RCC surgical resection specimens under a microscope can be challenging and time consuming for pathologists for many reasons. For instance, nephrectomy specimens exhibit substantial heterogeneity, exemplifying the wide variation observed within RCC surgical resection samples [65]. Moreover, variability among different observers, and even within the same observer, has been reported [33].
Good results were obtained by Tabibu et al. in terms of distinguishing between ccRCC and chRCC and normal tissue using two pre-trained convolutional neural networks (CNN) and replacing the final layers with two output layers, which were fine-tuned using RCC data [66]. Moreover, for subtype classification, the group introduced a so-called directed acyclic graph support vector machine (DAG-SVM) on top of the deep network, obtaining good accuracy in this task. Unlike Tabibu et al.’s model, Chen et al. developed a DL algorithm to detect RCC that was externally validated on an independent dataset [67]. To accomplish this task, they used LASSO (least absolute shrinkage and selection operator), a method used in ML to select, from a larger set of features, those most important for predicting outcomes. Through LASSO analysis, they identified various image features based on the “The Cancer Genome Atlas” (TCGA) cohort to distinguish between ccRCC and normal renal parenchyma, as well as ccRCC and pRCC and chRCC, obtaining high accuracy in test and external validation cohorts.
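LASSO's feature-selection behavior can be reproduced in a short numpy sketch using cyclic coordinate descent on synthetic data (toy features, not the TCGA-derived image features used by Chen et al.). Note how the weights of the irrelevant features are driven to zero.

```python
import numpy as np

def soft_threshold(x, t):
    return np.sign(x) * max(abs(x) - t, 0.0)

def lasso(X, y, alpha, n_iter=200):
    """LASSO via cyclic coordinate descent:
    minimizes (1/2n)||y - Xw||^2 + alpha * ||w||_1."""
    n, p = X.shape
    w = np.zeros(p)
    for _ in range(n_iter):
        for j in range(p):
            # residual excluding feature j's current contribution
            r = y - X @ w + X[:, j] * w[j]
            rho = X[:, j] @ r
            w[j] = soft_threshold(rho, n * alpha) / (X[:, j] @ X[:, j])
    return w

# Synthetic data: 10 candidate "image features", only the first two matter.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))
y = 3.0 * X[:, 0] - 2.0 * X[:, 1] + rng.normal(scale=0.1, size=200)

w = lasso(X, y, alpha=0.1)
```

The L1 penalty zeroes out the eight uninformative coefficients while keeping the two true predictors (slightly shrunk), which is exactly the sparsity property that makes LASSO useful for selecting a handful of prognostic features from thousands of candidates.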
Also, Marostica et al. created a pipeline using transfer learning to identify cancerous regions from slide images and classify the three major subtypes, obtaining good performance in both the test set and two external independent datasets (Table 3) [68].
RCC classification is a challenging task not only due to the complexity of the procedure itself, but also because the classification system is subject to periodic updates [69,70]. For example, only in recent years has clear cell papillary renal cell carcinoma (ccpRCC) been recognized as a specific entity [4]. This subtype of RCC histologically resembles both ccRCC and pRCC, and it shows clear cell changes. However, ccpRCC has distinct immuno-histochemical and genetic profiles compared to ccRCC and pRCC [71]. It also carries a favorable prognosis relative to both; therefore, the World Health Organization recently changed its denomination to “clear cell papillary renal cell tumor” [72]. Abdeltawab et al. developed a computational model that could distinguish between ccRCC and ccpRCC, obtaining an accuracy of 91% in identifying ccpRCC using its institution’s files and 90% in diagnosing ccRCC using an external dataset [73].
Table 1. Overview of studies of AI models for diagnosis and subtyping.

Fenstermaker et al. [55]
- Aim: (1) RCC diagnosis; (2) subtyping; (3) grading.
- Patients: 15 ccRCC; 15 pRCC; 12 chRCC.
- Training process: training was halted when no significant error decrease was recorded over 25 epochs and performance on a validation dataset ceased to improve.
- Accuracy on the test set: (1) 99.1%; (2) 97.5%; (3) 98.4%.
- External validation: N.A.
- Algorithm: CNN with 6 convolutional layers: 2 layers of 32 filters, 2 layers of 64 filters, and 2 layers of 128 filters.

Zhu et al. [59]
- Aim: RCC subtyping.
- Patients: (1) 486 SR (30 NT, 27 RO, 38 chRCC, 310 ccRCC, 81 pRCC); (2) 79 RMB (24 RO, 34 ccRCC, 21 pRCC).
- Training process: the models were trained for 40 epochs; the trained model assigned a confidence score to each patch, and the trained models were then compared.
- Accuracy on the test set: (1) 97% on SR; (2) 97% on RMB.
- External validation (0 RO, 109 chRCC, 505 ccRCC, 294 pRCC): 95% accuracy (SRs only).
- Algorithm: DNN; four ResNet versions (ResNet-18, ResNet-34, ResNet-50, and ResNet-101) were tested, and ResNet-18 was selected for the highest average F1-score on the development set (0.96).

Chen et al. [67]
- Aim: (1) RCC diagnosis; (2) subtyping; (3) survival prediction.
- Patients: (1, 2) 362 NT, 362 ccRCC, 128 pRCC, 84 chRCC; (3) 283 ccRCC.
- Training process: LASSO was used to identify RCC-related digital pathological factors and their coefficients in the training cohort; LASSO–Cox regression was used to identify survival-related digital pathological factors and their coefficients.
- Accuracy on the test set: (1) 94.5% vs. NT; (2) 97% vs. pRCC and chRCC; (3) 88.8%, 90.0%, and 89.6% for 1-, 3-, and 5-year DFS.
- External validation ((1, 2) 150 NP, 150 ccRCC, 52 pRCC, and 84 chRCC; (3) 120 ccRCC): (1) 87.6% vs. NP; (2) 81.4% vs. pRCC and chRCC; (3) 72.0%, 80.9%, and 85.9% for 1-, 3-, and 5-year DFS.
- Algorithm: segmentation and feature extraction pipeline via CellProfiler; (1, 2) LASSO; (3) LASSO–Cox regression analysis.

Tabibu et al. [66]
- Aim: (1) RCC diagnosis; (2) subtyping.
- Patients: 509 NT; 1027 ccRCC; 303 pRCC; 254 chRCC.
- Training process: training was terminated when validation accuracy stabilized for 4–5 epochs; data augmentation included random patches, vertical flips, rotation, and noise addition; weighted resampling was used to address class imbalance; training parameters remained unchanged.
- Accuracy on the test set: (1) 93.9% ccRCC vs. NP and 87.34% chRCC vs. NP; (2) 92.16% subtyping.
- External validation: N.A.
- Algorithm: CNN (ResNet-18- and ResNet-34-based architectures); DAG-SVM on top of the CNN for subtyping.

Abdeltawab et al. [73]
- Aim: RCC subtyping.
- Patients: 27 ccRCC; 14 ccpRCC.
- Training process: each image was divided into overlapping patches of different sizes for feature recognition at multiple scales; multiple CNNs outperformed a single CNN for learning features at different scales; a 50% patch overlap enabled learning from diverse viewpoints.
- Accuracy on the test set: 91% for ccpRCC.
- External validation (10 ccRCC): 90% for ccRCC.
- Algorithm: three CNNs for small, medium, and large patch sizes, sharing the same architecture: a series of convolutional layers interleaved with max-pooling layers, followed by two fully connected layers and a final soft-max layer.

ccRCC = clear cell renal cell carcinoma, ccpRCC = clear cell papillary renal cell carcinoma, chRCC = chromophobe renal cell carcinoma, CNN = convolutional neural network, DAG-SVM = directed acyclic graph–support vector machine, DFS = disease-free survival, DNN = deep neural network, LASSO = least absolute shrinkage and selection operator, N.A. = not applicable, NP = normal parenchyma, NT = normal tissue, pRCC = papillary renal cell carcinoma, ResNet = residual neural network, RMB = renal mass biopsy, RO = renal oncocytoma, SR = surgical resection.
The abovementioned studies mainly used supervised approaches highly tailored to RCC, making them time consuming to develop. However, the capability to apply knowledge gained from previous experiences to novel situations is a vital human skill. For example, pathologists can use lessons learned outside of their specific subspecialty because several cancer types exhibit common hallmarks of malignancy, as demonstrated by Faust et al., who tested whether a previously trained AI system developed to recognize brain tumor features could be applied to cluster and analyze RCC specimens in an unsupervised fashion [74]. The results showed that the separation of cancer regions from non-neoplastic tissue elements matched expert annotations in multiple randomly selected cases. This result, hypothetically, demonstrates that unsupervised ML-based methods built for the diagnosis of other cancers can also be used to diagnose RCC, reducing development and work time.

5. Pathomics in Disease Prognosis

The prognosis of RCC depends on several anatomical and clinical factors, while histological and molecular features also play important prognostic roles in both non-metastatic disease and mRCC [75].

5.1. Cancer Grading

Tumor grading is considered to be one of the most critical factors in prognosis prediction, as the 5-year survival rate for patients with low-grade RCC is around 90%, while in high-grade RCC, the survival rate is about 12% [75,76,77].
Although largely replaced by the WHO/ISUP grading classification, the Fuhrman grading system still acts as an independent factor in determining a higher risk of recurrence and a lower chance of survival [78,79,80,81,82]. The Fuhrman grading system predominantly focuses on the morphology of the nucleus (size and shape) and the existence of prominent nucleoli, though inter- and intra-observer variability is a serious issue [33,37,83]. Yeh et al. trained a support vector machine (SVM) classifier that performed well in identifying nuclei, estimating their size, and calculating their spatial distribution, as well as in distinguishing between low- and high-grade ccRCC specimens [84]. However, it could not differentiate between specific grades (e.g., III and IV), and no analyses of patients’ likelihood of survival were presented.
Unlike the Fuhrman grading system, the WHO/ISUP system relies solely on nucleolar prominence for grade 1–3 tumors, allowing lower inter-observer variation [85]. Therefore, Holdbrook et al. developed a model that detected prominent nucleoli and quantified nuclear pleomorphic patterns by concatenating features extracted from prominent nucleoli (i.e., combining different features into a single input representation for the model) and classifying them as either high- or low-grade [86]. The model also showed excellent grade classification accuracy and prognosis prediction when these results were compared to a multigene score.
The aforementioned computational systems differ in many respects, such as image processing, feature extraction, and classification method, while sharing the prediction of two-tiered grades, which has demonstrated effective performance in cancer-specific survival (CSS) prediction [87]. Tian et al. used 395 ccRCC cases from the TCGA dataset reviewed by a pathologist and stratified via the two-tiered system into low or high grade [88]. Of these cases, 277 showed concordance between the TCGA grade and the pathologist’s assigned grade and were used to train the model by extracting different histomic features for each patch. They used LASSO regression to select the features most associated with different grades, obtaining a model that predicted two-tiered ccRCC grading in good agreement with manual grades. It also showed a significant association between the predicted grade and overall survival, even when adjusting for age and gender. Furthermore, the model’s predicted grade was superior to the TCGA and pathologist grades in terms of overall survival prediction in discordant cases. This study differed from those of Yeh et al. [84], who only evaluated one feature (i.e., maximum nuclei size) to predict the two-tiered grade, and Holdbrook et al. [86], who used up to four concatenated feature vectors to calculate F-scores before classifying features into low or high grade; the features used in the model of Holdbrook et al. are unspecified.
In addition, Tian et al. and Holdbrook et al. showed that the predicted grade had prognostic value, whereas Yeh et al. did not report any association between their grade and prognosis.
Tian et al.’s study used a conventional image analysis technique for nuclei segmentation. However, DL-based techniques for nuclei segmentation might be viable solutions to this task, as shown by the methods of Yeh et al. and Song et al. [84,89]. The results of the studies mentioned above are summarized in Table 2.

5.2. Molecular-Morphological Connections and AI-Based Therapy Response Prediction

Recent developments in predicting RCC survival suggest that molecular differences within subtypes affect prognosis and may yield predictive molecular biomarkers and marker signatures, even though there is no definitive evidence to date supporting the routine clinical use of biomarkers for treatment selection in metastatic RCC (mRCC) [90,91,92,93,94,95].
As the identification of predictive biomarkers still represents an unmet clinical need, AI can be used to explore connections between molecular biomarkers and morphological features on histopathology images, thus overcoming the limitations of traditional biomarker analysis, such as high costs (both financial and in terms of time), limited sample sizes, and lack of standardization [96,97,98,99].
Among the many possible genetic aberrations in RCC, one crucial class comprises copy number alterations (CNAs), which are associated with RCC development, treatment response, and prognosis [100,101]. Marostica et al. used transfer learning to develop CNA and somatic mutation image-based prediction models. They demonstrated that CNAs in several genes, including KRAS, EGFR, and VHL, could affect quantitative histopathology patterns [68]. Furthermore, the group leveraged a framework to predict ccRCC tumor mutational burden, a potential yet controversial biomarker for immune checkpoint blockade response [102], and obtained good performance on this task. It is important to note that this approach was weakly supervised and required only slide-level labels, without detailed region- or pixel-level segmentation, making it readily applicable for clinical use.
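The transfer-learning idea (reusing a frozen, pretrained feature extractor and retraining only a small task-specific head) can be sketched conceptually as follows. A fixed random projection stands in for the pretrained backbone and all data are synthetic; a real pipeline, like the ones reviewed here, would use, for example, an ImageNet-pretrained CNN.

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in for a frozen, pretrained backbone: a fixed mapping from raw
# inputs (64-d "pixels") to 16-d features. Frozen = never updated below.
W_backbone = rng.normal(size=(64, 16))

def backbone(x):
    return np.tanh(x @ W_backbone)

# Toy binary task: two input distributions for the new target domain.
X0 = rng.normal(loc=-1.0, size=(100, 64))
X1 = rng.normal(loc=+1.0, size=(100, 64))
X = np.vstack([X0, X1])
y = np.array([0] * 100 + [1] * 100)

# "Fine-tuning": train only a logistic-regression head on the frozen features.
F = backbone(X)
w, b = np.zeros(F.shape[1]), 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(F @ w + b)))  # sigmoid
    w -= 0.5 * (F.T @ (p - y) / len(y))     # gradient step on the head only
    b -= 0.5 * (p - y).mean()

acc = (((1.0 / (1.0 + np.exp(-(F @ w + b)))) > 0.5) == y).mean()
```

Because only the small head is trained, far less labeled data is needed than for training a deep network from scratch, which is the main appeal of transfer learning for pathology tasks with limited annotations.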
Although immunotherapy has changed the field of mRCC over the last years, TKI monotherapy still plays an essential role as a later-line therapy in treating patients who are unable to receive or tolerate checkpoint inhibitors [75,103]. Go et al. developed an ML-based method to identify which mRCC patients will respond to VEGFR-TKI treatment by analyzing clinical, pathology, and molecular data from 101 patients [104]. Specimens of the resected primary tumor were collected and retrospectively divided into clinical-benefit and non-clinical-benefit groups. The authors developed a predictive classifier and obtained a prediction accuracy of 0.87.
As stated, gene expression signatures are commonly used as predictive biomarkers. Endothelial cells and vascular architecture are known to play roles in the biological behavior of the tumor [105]. Ing et al. used ML to analyze tumor vasculature to gather prognostic insights [106]. They used ccRCC cases from the TCGA database to train their algorithm and discovered that nine vascular features correlated with clinical outcomes. Four of these features showed greater variation in individuals with poor outcomes than in those with favorable outcomes, linking variation in vascular structure to worse results. Ing et al. identified 14 genes that correlated strongly with these features and built 2 ML-based models with satisfactory prediction outcomes comparable to those of traditional gene signatures. Further efforts are needed to develop models combining morphologic and genomic biomarkers to improve patients’ prognosis and treatment options.
Another active area of RCC research is the field of epigenetics [107,108,109,110,111]. Zheng et al. investigated possible interactions between histopathologic features and epigenetic changes in RCC [112]. Using morphometric features extracted from histopathological images, they employed ML models to accurately forecast differential methylation values for specific genes or gene clusters. Prospective studies are now needed to probe the mechanisms underlying cancer progression using the predicted genes [113]. The results of the studies mentioned above are summarized in Table 3.

5.3. Prognosis Prediction Models Based on Computational Pathology

In the past, several models were developed and externally validated to predict the prognosis of RCC patients. These models, which are currently used for both localized and metastatic RCC, are mainly based on clinicopathological data [114,115,116,117]. For localized ccRCC, the main prognostic models are the Leibovich score [116] and the UISS score [117]. Both rely primarily on clinicopathological data, so the pathologist’s subjective assessment limits their performance [118,119]. All the models mentioned incorporate clinical parameters within their framework; however, models based exclusively on pathological data have also been validated [120]. Regarding mRCC, risk groups assigned via the Memorial Sloan Kettering Cancer Center (MSKCC) and the International Metastatic Renal Cell Carcinoma Database Consortium (IMDC) models may differ in up to 23% of cases [75]. Although these models have shown reasonably good performance in the past, there is still room for improvement [121]. Multimodal AI approaches applied to medical problems can raise accuracy by up to 27.7% compared to a single modality [122]. Specifically, integrating an ML-based algorithm that predicts RCC survival from histopathology with other established prognostic modalities improved prediction accuracy in multiple studies [123,124].
Cheng et al. were the first to combine gene expression data and histopathologic features for ccRCC prognosis [125], generating a risk index that correlated strongly with survival and outperformed predictions based on morphologic features or eigengenes considered separately. The predicted risk could also stratify early-stage patients (stages I and II), whereas stage alone yielded no significant difference in survival outcomes. Cheng et al.’s prognostic model, however, did not integrate microenvironment or radiologic imaging information; the latter proved to be the single modality with the best predictive performance in a computational method presented by Ning et al., which combined features extracted from CT, histopathological images, and clinical and genomic data [126]. However, Ning et al.’s method also had limitations, such as a small sample size and a lack of external validation. Another algorithm, used by Chen et al., was trained on ccRCC images from the TCGA cohort and validated on Shanghai General Hospital images to identify survival-related digital pathological factors and combine them with clinicopathological factors (age, stage, and grade) [67]. The integrated nomogram developed in that study showed good ability to predict 1-, 3-, and 5-year DFS (Table 1). The study defined the cut-off between high- and low-risk scores as the median score of each cohort. External validation in a larger cohort or a prospective study is therefore needed to confirm the validity of this novel computational recognition model and to determine the optimal cut-off value for high- and low-risk scores.
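The per-cohort median cut-off used to define high- and low-risk groups can be illustrated with a minimal sketch; all risk scores below are hypothetical values, not data from the studies discussed.

```python
import statistics

def stratify_by_median(risk_scores):
    """Split patients into low-/high-risk groups using the cohort median
    as the cut-off, as done per cohort in the study discussed above."""
    cutoff = statistics.median(risk_scores)
    low = [i for i, s in enumerate(risk_scores) if s <= cutoff]
    high = [i for i, s in enumerate(risk_scores) if s > cutoff]
    return cutoff, low, high

# Hypothetical risk scores for eight patients
scores = [0.12, 0.80, 0.45, 0.33, 0.91, 0.27, 0.60, 0.51]
cutoff, low, high = stratify_by_median(scores)
```

Because the cut-off is the median of the cohort itself, the two groups are balanced by construction, which is exactly why an externally determined, validated threshold is preferable for clinical use.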
Another study, by Schulz et al., reported a multimodal deep learning model trained on multiscale histopathological images, CT/MRI scans, and genomic data from whole-exome sequencing [127]. The model showed excellent performance in predicting 5-year survival status, outperforming individual parameters (T stage, N stage, M stage, and grading). Dividing the cohorts into low- and high-risk patients yielded a significant difference in the survival curves, even when only M0 or M+ patients were evaluated. However, this study had the following limitations: it did not compare the model against widely used prognostic tools that incorporate factors such as performance status and calcium levels; the external validation sample size was relatively small; and further research is required to confirm the generalizability of the authors’ approach.
The above-mentioned and future models should be externally validated, used in prospective cohorts, and compared to current prognostic models regarding discrimination, calibration, and net benefit [75]. The results of the studies mentioned above are summarized in Table 4.

6. Future Perspectives

According to currently available data, AI and ML in RCC pathology (‘pathomics’) hold promise for the future, as they might help us to overcome several problems in classic histopathology, such as intra- and inter-observer variability and time consumption. Currently, several AI methods can be reliable in RCC diagnosis and, on some occasions, appear capable of predicting clinical outcomes in a few seconds. This capability could be of great help for pathologists at a time when the incidence of RCC is still rising. However, this exciting field is still relatively new and not without teething troubles, both in general and specifically within the realm of RCC [128,129].
In this review, we reported the excellent results achieved using AI in several tasks, such as staging and grading. Supervised learning methods perform these tasks efficiently, but their outputs cannot be visually verified. In simple terms, the machine generates an answer (i.e., low or high grade, or subtype) according to its learned parameters, which humans cannot inspect; such algorithms are often referred to as black-box algorithms [130]. This opacity makes them prone to doubt within the pathology community, as the pathologist must trust the findings before approving a report and discussing it in multidisciplinary meetings [131]. One possible solution is to create tools that bring transparency to non-linear machine learning techniques. For instance, gradient-weighted class activation mapping (grad-CAM) overlays heatmaps on images to visualize the cell types or regions in which the informative features were expressed [132]. Another possible solution is “searching and matching”, instead of “classifying”, in an unsupervised fashion, as the group of Faust et al. did for RCC diagnosis [74]. With unsupervised learning, computers can search for and cluster images with matching features in a dataset without labeled data, the creation of which can be labor-intensive and potentially biased [133]. This method broadly resembles the current workflow, as pathologists often use atlases to compare images found in the specimen and judge whether they match previously described conditions; alternatively, they may ask other experts for a second opinion. However, this approach does not exclude the intervention of human experts, since a pathologist still needs to visually inspect and interpret the images.
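As a rough illustration of how grad-CAM produces its heatmaps, the sketch below computes a class activation map from a convolutional layer’s activations and the gradients of the class score with respect to them; random toy tensors stand in for a real network’s outputs.

```python
import numpy as np

def grad_cam(feature_maps, gradients):
    """Grad-CAM heatmap from a convolutional layer's activations and the
    gradients of the target class score w.r.t. those activations.
    feature_maps, gradients: arrays of shape (channels, H, W)."""
    # Channel importance weights: global-average-pool the gradients
    weights = gradients.mean(axis=(1, 2))                       # (channels,)
    # Weighted sum of feature maps, then ReLU to keep positive evidence only
    cam = np.maximum((weights[:, None, None] * feature_maps).sum(axis=0), 0.0)
    # Normalize to [0, 1] so the map can be overlaid on the input image
    if cam.max() > 0:
        cam /= cam.max()
    return cam

# Toy example: random activations/gradients in place of a trained CNN
rng = np.random.default_rng(0)
fmaps = rng.random((8, 7, 7))
grads = rng.random((8, 7, 7))
heatmap = grad_cam(fmaps, grads)
```

In practice, the low-resolution map is upsampled to the tile size and blended with the H&E image so the pathologist can see which regions drove the prediction.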
Another possible drawback of computational pathology is the current lack of generalization due to potentially biased inputs used when training models. With cross-validation, for example, ML models are validated on a set different from the training set, but the evaluation is still biased if the input data are biased. A recommended step before model training is therefore to check for potential sample bias and to assess possible issues related to sample size [134,135], heterogeneity [136], noise [137], and confounding factors [138].
Moreover, if the data are derived from a single pathology laboratory, the algorithm may be unable to account for variations and artifacts arising at other institutions. For example, the color distribution of WSIs varies across pathology laboratories because of differences in the staining process.
Once the data are adequately processed, the model is trained on the training set, and its performance is evaluated on the validation set. So-called ‘overfitting’ occurs when a model is so finely tuned to a particular dataset that it fails to generalize to new, unseen data; overfitting is akin to memorizing the answers to a test rather than understanding the material. Once training is complete, the final performance of the model is evaluated on the test set, which contains data the model has never seen before. This final evaluation estimates the model’s performance on new, unseen data [139]. However, an overfitted model can still appear to perform well when the test data are derived from the same laboratory as the training data.
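The train/validation/test protocol described above can be sketched as follows; the split fractions are illustrative, not prescriptive.

```python
import random

def train_val_test_split(items, val_frac=0.15, test_frac=0.15, seed=42):
    """Shuffle once, then carve out validation and test sets.  The test set
    is only touched after training is complete, so it estimates performance
    on unseen data; a large train-vs-test gap signals overfitting."""
    items = list(items)
    random.Random(seed).shuffle(items)
    n_test = int(len(items) * test_frac)
    n_val = int(len(items) * val_frac)
    test = items[:n_test]
    val = items[n_test:n_test + n_val]
    train = items[n_test + n_val:]
    return train, val, test

# 100 slide identifiers split 70/15/15
train, val, test = train_val_test_split(range(100))
```

The caveat in the text still applies: if all 100 slides come from one laboratory, even a clean test-set score says nothing about performance at another center.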
This single-center approach leads to inter-center variability that impacts the accuracy of machine learning algorithms used to automatically analyze WSIs. The issue also affects state-of-the-art CNN-based algorithms, which often exhibit reduced performance when applied to images from a center other than the one on which they were trained [22,23,140,141]. A global standard for tissue processing, staining, slide preparation, and even digital acquisition in surgical pathology would therefore be of great help [142]. Existing solutions for reducing generalization error in this setting fall into stain color augmentation and stain color normalization; ML-based methods that perform stain color normalization with a neural network have been proposed [143]. One of the most effective ways to guard against overfitting is external validation, which tests the method on a group of new patients distinct from the initial set, thereby assessing the model’s ability to generalize [20].
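As a minimal illustration of stain color normalization, the sketch below applies a simple Reinhard-style transform that matches each color channel’s mean and standard deviation to those of a target image; production methods cited in the literature (e.g., Macenko normalization or neural-network-based approaches) instead operate on stain-specific color bases.

```python
import numpy as np

def reinhard_normalize(source, target):
    """Shift and scale each color channel of `source` so its per-channel
    mean/std match those of `target`.  A sketch of the idea only: real
    pipelines usually work in a perceptual color space and per stain.
    Images: float arrays of shape (H, W, 3)."""
    src_mu = source.mean(axis=(0, 1))
    src_sd = source.std(axis=(0, 1)) + 1e-8   # avoid division by zero
    tgt_mu = target.mean(axis=(0, 1))
    tgt_sd = target.std(axis=(0, 1))
    return (source - src_mu) / src_sd * tgt_sd + tgt_mu

# Toy "slides": random pixels with different color distributions
rng = np.random.default_rng(1)
src = rng.random((16, 16, 3))                 # slide from laboratory A
tgt = rng.random((16, 16, 3)) * 0.5 + 0.25    # reference slide, laboratory B
normed = reinhard_normalize(src, tgt)
```

After the transform, the source tile shares the target’s global color statistics, which is what reduces the staining-related domain gap between centers.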
External validation provides the critical evidence of generalizability: features selected based on idiosyncrasies of the original training data, such as technical or sampling biases, would likely not function properly on new data. As a result, adequate performance on a reasonably large external validation set is regarded as evidence of a model’s generalizability (Figure 1 and Figure 2) [144].
Additionally, it is important to note that, as stated above, radiomics has shown promising results in several tasks, particularly diagnosis and subtyping. Many studies used histopathology as the reference standard against which radiomic models were evaluated [145]. Over the past decade, the focus of computational pathology research has shifted: initially, the aim was to replicate the diagnostic process already performed by pathologists, whereas the most recent literature has moved towards uncovering and exploring “sub-visual” prognostic image cues derived from histopathological images.
Radiomics involves the extraction of computational features that quantify tissue heterogeneity at the macroscopic level by leveraging ML. In contrast, pathomics provides quantitative information at the micro scale. The fusion of radiomics and pathomics may, in the future, offer an opportunity to combine tumor heterogeneity at both the macro and micro scales, potentially enhancing the integrated signature through complementary insights [146].
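One simple way to fuse radiomic (macro-scale) and pathomic (micro-scale) descriptors into a single integrated signature is feature-level concatenation after per-modality standardization, sketched below with hypothetical feature vectors.

```python
def fuse_features(radiomics, pathomics):
    """Feature-level fusion: z-score each modality separately, so that
    neither modality dominates by virtue of its units or scale, then
    concatenate into one integrated signature vector."""
    def zscore(values):
        mu = sum(values) / len(values)
        sd = (sum((x - mu) ** 2 for x in values) / len(values)) ** 0.5 or 1.0
        return [(x - mu) / sd for x in values]
    return zscore(radiomics) + zscore(pathomics)

# Hypothetical per-patient features: three radiomic, four pathomic
signature = fuse_features([1.0, 2.0, 3.0], [10.0, 20.0, 30.0, 40.0])
```

The fused vector can then be fed to any downstream prognostic model; more elaborate fusion strategies (e.g., learning modality weights) follow the same basic pattern.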
To conclude, AI is a promising tool that remains under investigation for the diagnosis, grading, prognosis assessment, and treatment of kidney neoplasms. The results of new AI algorithms are encouraging, as they are either on par with or outperform current state-of-the-art methods. However, most of these technologies are not yet available for widespread clinical use, and further evidence of their efficacy is needed. Further advancements in this exciting field are therefore eagerly awaited [23].

Supplementary Materials

The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/diagnostics13132294/s1, Table S1: AI models datasets for diagnosis and subtyping; Table S2: AI models datasets for grading; Table S3: AI methods datasets for prognostic models; Table S4: AI models datasets for molecular morphologic connection and therapy response predictions.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The Cancer Genome Atlas Program: https://www.cancer.gov/ccg/access-data (accessed on 31 May 2023).

Conflicts of Interest

The authors confirm that there are no conflicts of interest with any financial organization regarding the material discussed in this manuscript.

Abbreviations

AI artificial intelligence
AUC area under curve
BFPS block filtering post-pruning search
ccpRCC clear cell papillary renal cell carcinoma
ccRCC clear cell renal cell carcinoma
chRCC chromophobe renal cell carcinoma
CNA copy number alteration
CNN convolutional neural network
CT computed tomography
DAG-SVM Directed Acyclic Graph Support Vector Machine
DCNN deep convolutional neural network
DFS disease-free survival
DL deep learning
DNN deep neural network
EGFR Epidermal growth factor receptor
FCNN fully-connected neural network
grad-CAM gradient-weighted class activation mapping
IMDC International Metastatic Renal Cell Carcinoma Database Consortium
KRAS V-Ki-ras2 Kirsten rat sarcoma viral oncogene homolog
LASSO Least Absolute Shrinkage and Selection Operator
lmQCM local maximum quasi-clique merging
ML machine learning
mRCC metastatic renal cell carcinoma
MRI magnetic resonance imaging
MSKCC Memorial Sloan Kettering Cancer Center
N.A. not applicable
NP normal parenchyma
NT normal tissue
OS overall survival
PFS Progression-free survival
pRCC papillary renal cell carcinoma
RCC renal cell carcinoma
ResNet residual neural network architecture
RMB renal mass biopsy
RO renal oncocytoma
SVM support vector machine
TCGA The Cancer Genome Atlas
TKI tyrosine kinase inhibitor
UISS UCLA Integrated Staging System for renal cell carcinoma
VEGFR-TKI VEGF receptor-tyrosine kinase inhibitors
VHL von Hippel-Lindau tumor suppressor
WSI whole slide imaging

References

  1. Capitanio, U.; Bensalah, K.; Bex, A.; Boorjian, S.A.; Bray, F.; Coleman, J.; Gore, J.L.; Sun, M.; Wood, C.; Russo, P. Epidemiology of Renal Cell Carcinoma. Eur. Urol. 2019, 75, 74–84. [Google Scholar] [CrossRef] [PubMed]
  2. Garfield, K.; LaGrange, C.A. Renal Cell Cancer; StatPearls: Treasure Island, FL, USA, 2022. [Google Scholar]
  3. Bukavina, L.; Bensalah, K.; Bray, F.; Carlo, M.; Challacombe, B.; Karam, J.A.; Kassouf, W.; Mitchell, T.; Montironi, R.; O’Brien, T.; et al. Epidemiology of Renal Cell Carcinoma: 2022 Update. Eur. Urol. 2022, 82, 529–542. [Google Scholar] [CrossRef] [PubMed]
  4. Moch, H.; Cubilla, A.L.; Humphrey, P.A.; Reuter, V.E.; Ulbright, T.M. The 2016 WHO Classification of Tumours of the Urinary System and Male Genital Organs—Part A: Renal, Penile, and Testicular Tumours. Eur. Urol. 2016, 70, 93–105. [Google Scholar] [CrossRef] [PubMed]
  5. Cimadamore, A.; Caliò, A.; Marandino, L.; Marletta, S.; Franzese, C.; Schips, L.; Amparore, D.; Bertolo, R.; Muselaers, S.; Erdem, S.; et al. Hot topics in renal cancer pathology: Implications for clinical management. Expert Rev. Anticancer. Ther. 2022, 22, 1275–1287. [Google Scholar] [CrossRef]
  6. Fuhrman, S.A.; Lasky, L.C.; Limas, C. Prognostic significance of morphologic parameters in renal cell carcinoma. Am. J. Surg. Pathol. 1982, 6, 655–664. [Google Scholar] [CrossRef]
  7. Zhang, L.; Zha, Z.; Qu, W.; Zhao, H.; Yuan, J.; Feng, Y.; Wu, B. Tumor necrosis as a prognostic variable for the clinical outcome in patients with renal cell carcinoma: A systematic review and meta-analysis. BMC Cancer 2018, 18, 870. [Google Scholar] [CrossRef]
  8. Sun, M.; Shariat, S.F.; Cheng, C.; Ficarra, V.; Murai, M.; Oudard, S.; Pantuck, A.J.; Zigeuner, R.; Karakiewicz, P.I. Prognostic factors and predictive models in renal cell carcinoma: A contemporary review. Eur. Urol. 2011, 60, 644–661. [Google Scholar] [CrossRef]
  9. Hora, M.; Albiges, L.; Bedke, J.; Campi, R.; Capitanio, U.; Giles, R.H.; Ljungberg, B.; Marconi, L.; Klatte, T.; Volpe, A.; et al. European Association of Urology Guidelines Panel on Renal Cell Carcinoma Update on the New World Health Organization Classification of Kidney Tumours 2022: The Urologist’s Point of View. Eur. Urol. 2023, 83, 97–100. [Google Scholar] [CrossRef]
  10. Mimma, R.; Anna, C.; Matteo, B.; Gaetano, P.; Carlo, G.; Guido, M.; Camillo, P. Clinico-pathological implications of the 2022 WHO Renal Cell Carcinoma classification. Cancer Treat. Rev. 2023, 116, 102558. [Google Scholar] [CrossRef]
  11. Baidoshvili, A.; Bucur, A.; Van Leeuwen, J.; Van Der Laak, J.; Kluin, P.; Van Diest, P.J. Evaluating the benefits of digital pathology implementation: Time savings in laboratory logistics. Histopathology 2018, 73, 784–794. [Google Scholar] [CrossRef]
  12. Shmatko, A.; Ghaffari Laleh, N.; Gerstung, M.; Kather, J.N. Artificial intelligence in histopathology: Enhancing cancer research and clinical oncology. Nat. Cancer 2022, 3, 1026–1038. [Google Scholar] [CrossRef]
  13. Roussel, E.; Capitanio, U.; Kutikov, A.; Oosterwijk, E.; Pedrosa, I.; Rowe, S.P.; Gorin, M.A. Novel Imaging Methods for Renal Mass Characterization: A Collaborative Review. Eur. Urol. 2022, 81, 476–488. [Google Scholar] [CrossRef]
  14. Bera, K.; Schalper, K.A.; Rimm, D.L.; Velcheti, V.; Madabhushi, A. Artificial intelligence in digital pathology—New tools for diagnosis and precision oncology. Nat. Rev. Clin. Oncol. 2019, 16, 703–715. [Google Scholar] [CrossRef]
  15. Niazi, M.K.K.; Parwani, A.V.; Gurcan, M.N. Digital pathology and artificial intelligence. Lancet Oncol. 2019, 20, e253–e261. [Google Scholar] [CrossRef] [PubMed]
  16. Colling, R.; Pitman, H.; Oien, K.; Rajpoot, N.; Macklin, P.; CM-Path AI in Histopathology Working Group; Snead, D.; Sackville, T.; Verrill, C. Artificial intelligence in digital pathology: A roadmap to routine use in clinical practice. J. Pathol. 2019, 249, 143–150. [Google Scholar] [CrossRef] [PubMed]
  17. Glembin, M.; Obuchowski, A.; Klaudel, B.; Rydzinski, B.; Karski, R.; Syty, P.; Jasik, P.; Narożański, W.J. Enhancing Renal Tumor Detection: Leveraging Artificial Neural Networks in Computed Tomography Analysis. Med. Sci. Monit. 2023, 29, e939462. [Google Scholar] [CrossRef] [PubMed]
  18. Volpe, A.; Patard, J.J. Prognostic factors in renal cell carcinoma. World J. Urol. 2010, 28, 319–327. [Google Scholar] [CrossRef] [PubMed]
  19. Tucker, M.D.; Rini, B.I. Predicting Response to Immunotherapy in Metastatic Renal Cell Carcinoma. Cancers 2020, 12, 2662. [Google Scholar] [CrossRef]
  20. Wang, D.; Khosla, A.; Gargeya, R.; Irshad, H.; Beck, A.H.; Israel, B. Deep Learning for Identifying Metastatic Breast Cancer. arXiv 2016, arXiv:1606.05718. [Google Scholar] [CrossRef]
  21. Hayashi, Y. Black Box Nature of Deep Learning for Digital Pathology: Beyond Quantitative to Qualitative Algorithmic Performances. In Artificial Intelligence and Machine Learning for Digital Pathology; Lecture Notes in Computer Science (Including Subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics); Springer: Cham, Switzerland, 2020; Volume 12090, pp. 95–101. [Google Scholar] [CrossRef]
  22. Komura, D.; Ishikawa, S. Machine Learning Methods for Histopathological Image Analysis. Comput. Struct. Biotechnol. J. 2018, 16, 34–42. [Google Scholar] [CrossRef]
  23. LeCun, Y.; Bengio, Y.; Hinton, G. Deep learning. Nature 2015, 521, 436–444. [Google Scholar] [CrossRef] [PubMed]
  24. Bulten, W.; Pinckaers, H.; van Boven, H.; Vink, R.; de Bel, T.; van Ginneken, B.; van der Laak, J.; Hulsbergen-van de Kaa, C.; Litjens, G. Automated deep-learning system for Gleason grading of prostate cancer using biopsies: A diagnostic study. Lancet Oncol. 2020, 21, 233–241. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  25. Ström, P.; Kartasalo, K.; Olsson, H.; Solorzano, L.; Delahunt, B.; Berney, D.M.; Bostwick, D.G.; Evans, A.J.; Grignon, D.J.; Humphrey, P.A.; et al. Artificial intelligence for diagnosis and grading of prostate cancer in biopsies: A population-based, diagnostic study. Lancet Oncol. 2020, 21, 222–232. [Google Scholar] [CrossRef] [PubMed]
  26. Saltz, J.; Gupta, R.; Hou, L.; Kurc, T.; Singh, P.; Nguyen, V.; Samaras, D.; Shroyer, K.R.; Zhao, T.; Batiste, R.; et al. Spatial Organization and Molecular Correlation of Tumor-Infiltrating Lymphocytes Using Deep Learning on Pathology Images. Cell Rep. 2018, 23, 181–193.e7. [Google Scholar] [CrossRef] [Green Version]
  27. Kapil, A.; Wiestler, T.; Lanzmich, S.; Silva, A.; Steele, K.; Rebelatto, M.; Schmidt, G.; Brieu, N. DASGAN—Joint Domain Adaptation and Segmentation for the Analysis of Epithelial Regions in Histopathology PD-L1 Images. arXiv 2019, arXiv:1906.11118. [Google Scholar]
  28. Sha, L.; Osinski, B.L.; Ho, I.Y.; Tan, T.L.; Willis, C.; Weiss, H.; Beaubier, N.; Mahon, B.M.; Taxter, T.J.; Yip, S.S.F. Multi-Field-of-View Deep Learning Model Predicts Nonsmall Cell Lung Cancer Programmed Death-Ligand 1 Status from Whole-Slide Hematoxylin and Eosin Images. J. Pathol. Inform. 2019, 10, 24. [Google Scholar] [CrossRef]
  29. Creighton, C.J.; Morgan, M.; Gunaratne, P.H.; Wheeler, D.A.; Gibbs, R.A.; Gordon Robertson, A.; Chu, A.; Beroukhim, R.; Cibulskis, K.; Signoretti, S.; et al. Comprehensive molecular characterization of clear cell renal cell carcinoma. Nature 2013, 499, 43–49. [Google Scholar] [CrossRef] [Green Version]
  30. Krajewski, K.M.; Pedrosa, I. Imaging Advances in the Management of Kidney Cancer. J. Clin. Oncol. 2018, 36, 3582–3590. [Google Scholar] [CrossRef]
  31. Roussel, E.; Campi, R.; Amparore, D.; Bertolo, R.; Carbonara, U.; Erdem, S.; Ingels, A.; Kara, Ö.; Marandino, L.; Marchioni, M.; et al. Expanding the Role of Ultrasound for the Characterization of Renal Masses. J. Clin. Med. 2022, 11, 1112. [Google Scholar] [CrossRef]
  32. Shuch, B.; Hofmann, J.N.; Merino, M.J.; Nix, J.W.; Vourganti, S.; Linehan, W.M.; Schwartz, K.; Ruterbusch, J.J.; Colt, J.S.; Purdue, M.P.; et al. Pathologic validation of renal cell carcinoma histology in the Surveillance, Epidemiology, and End Results program. Urol. Oncol. Semin. Orig. Investig. 2013, 32, 23.e9–23.e13. [Google Scholar] [CrossRef] [Green Version]
  33. Al-Aynati, M.; Chen, V.; Salama, S.; Shuhaibar, H.; Treleaven, D.; Vincic, L. Interobserver and Intraobserver Variability Using the Fuhrman Grading System for Renal Cell Carcinoma. Arch. Pathol. Lab. Med. 2003, 127, 593–596. [Google Scholar] [CrossRef]
  34. Williamson, S.R.; Rao, P.; Hes, O.; Epstein, J.I.; Smith, S.C.; Picken, M.M.; Zhou, M.; Tretiakova, M.S.; Tickoo, S.K.; Chen, Y.-B.; et al. Challenges in pathologic staging of renal cell carcinoma: A study of interobserver variability among urologic pathologists. Am. J. Surg. Pathol. 2018, 42, 1253–1261. [Google Scholar] [CrossRef] [Green Version]
  35. Gavrielides, M.A.; Gallas, B.D.; Lenz, P.; Badano, A.; Hewitt, S.M. Observer variability in the interpretation of HER2/neu immunohistochemical expression with unaided and computer-aided digital microscopy. Arch. Pathol. Lab. Med. 2011, 135, 233–242. [Google Scholar] [CrossRef]
  36. Ficarra, V.; Martignoni, G.; Galfano, A.; Novara, G.; Gobbo, S.; Brunelli, M.; Pea, M.; Zattoni, F.; Artibani, W. Prognostic Role of the Histologic Subtypes of Renal Cell Carcinoma after Slide Revision. Eur. Urol. 2006, 50, 786–794. [Google Scholar] [CrossRef]
  37. Lang, H.; Lindner, V.; de Fromont, M.; Molinié, V.; Letourneux, H.; Meyer, N.; Martin, M.; Jacqmin, D. Multicenter determination of optimal interobserver agreement using the Fuhrman grading system for renal cell carcinoma. Cancer 2004, 103, 625–629. Available online: https://acsjournals.onlinelibrary.wiley.com/doi/full/10.1002/cncr.20812 (accessed on 1 February 2023). [CrossRef] [PubMed]
  38. Smaldone, M.C.; Egleston, B.; Hollingsworth, J.M.; Hollenbeck, B.K.; Miller, D.C.; Morgan, T.M.; Kim, S.P.; Malhotra, A.; Handorf, E.; Wong, Y.-N.; et al. Understanding Treatment Disconnect and Mortality Trends in Renal Cell Carcinoma Using Tumor Registry Data. Med. Care 2017, 55, 398–404. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  39. Kutikov, A.; Smaldone, M.C.; Egleston, B.L.; Manley, B.J.; Canter, D.J.; Simhan, J.; Boorjian, S.A.; Viterbo, R.; Chen, D.Y.; Greenberg, R.E.; et al. Anatomic Features of Enhancing Renal Masses Predict Malignant and High-Grade Pathology: A Preoperative Nomogram Using the RENAL Nephrometry Score. Eur. Urol. 2011, 60, 241–248. [Google Scholar] [CrossRef] [Green Version]
  40. Pierorazio, P.M.; Patel, H.D.; Johnson, M.H.; Sozio, S.; Sharma, R.; Iyoha, E.; Bass, E.; Allaf, M.E. Distinguishing malignant and benign renal masses with composite models and nomograms: A systematic review and meta-analysis of clinically localized renal masses suspicious for malignancy. Cancer 2016, 122, 3267–3276. [Google Scholar] [CrossRef]
  41. Joshi, S.; Kutikov, A. Understanding Mutational Drivers of Risk: An Important Step Toward Personalized Care for Patients with Renal Cell Carcinoma. Eur. Urol. Focus 2016, 3, 428–429. [Google Scholar] [CrossRef] [PubMed]
  42. Nguyen, M.M.; Gill, I.S.; Ellison, L.M. The Evolving Presentation of Renal Carcinoma in the United States: Trends From the Surveillance, Epidemiology, and End Results Program. J. Urol. 2006, 176, 2397–2400. [Google Scholar] [CrossRef] [PubMed]
  43. Sohlberg, E.M.; Metzner, T.J.; Leppert, J.T. The Harms of Overdiagnosis and Overtreatment in Patients with Small Renal Masses: A Mini-review. Eur. Urol. Focus 2019, 5, 943–945. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  44. Campi, R.; Stewart, G.D.; Staehler, M.; Dabestani, S.; Kuczyk, M.A.; Shuch, B.M.; Finelli, A.; Bex, A.; Ljungberg, B.; Capitanio, U. Novel Liquid Biomarkers and Innovative Imaging for Kidney Cancer Diagnosis: What Can Be Implemented in Our Practice Today? A Systematic Review of the Literature. Eur. Urol. Oncol. 2021, 4, 22–41. [Google Scholar] [CrossRef]
  45. Warren, H.; Palumbo, C.; Caliò, A.; Tran, M.G.B.; Campi, R.; European Association of Urology (EAU) Young Academic Urologists (YAU) Renal Cancer Working Group. Oncocytoma on renal mass biopsy: Why is surgery even performed? World J. Urol. 2023, 41, 1709–1710. [Google Scholar] [CrossRef] [PubMed]
  46. Kutikov, A.; Smaldone, M.C.; Uzzo, R.G.; Haifler, M.; Bratslavsky, G.; Leibovich, B.C. Renal Mass Biopsy: Always, Sometimes, or Never? Eur. Urol. 2016, 70, 403–406. [Google Scholar] [CrossRef] [PubMed]
  47. Lane, B.R.; Samplaski, M.K.; Herts, B.R.; Zhou, M.; Novick, A.C.; Campbell, S.C. Renal Mass Biopsy—A Renaissance? J. Urol. 2008, 179, 20–27. [Google Scholar] [CrossRef] [PubMed]
  48. Sinks, A.; Miller, C.; Holck, H.; Zeng, L.; Gaston, K.; Riggs, S.; Matulay, J.; Clark, P.E.; Roy, O. Renal Mass Biopsy Mandate Is Associated With Change in Treatment Decisions. J. Urol. 2023, 210, 72–78. [Google Scholar] [CrossRef]
  49. Marconi, L.; Dabestani, S.; Lam, T.B.; Hofmann, F.; Stewart, F.; Norrie, J.; Bex, A.; Bensalah, K.; Canfield, S.E.; Hora, M.; et al. Systematic Review and Meta-analysis of Diagnostic Accuracy of Percutaneous Renal Tumour Biopsy. Eur. Urol. 2016, 69, 660–673. [Google Scholar] [CrossRef]
  50. Evans, A.J.; Delahunt, B.; Srigley, J.R. Issues and challenges associated with classifying neoplasms in percutaneous needle biopsies of incidentally found small renal masses. Semin. Diagn. Pathol. 2015, 32, 184–195. [Google Scholar] [CrossRef]
  51. Kümmerlin, I.; ten Kate, F.; Smedts, F.; Horn, T.; Algaba, F.; Trias, I.; de la Rosette, J.; Laguna, M.P. Core biopsies of renal tumors: A study on diagnostic accuracy, interobserver, and intraobserver variability. Eur. Urol. 2008, 53, 1219–1227. [Google Scholar] [CrossRef]
  52. Elmore, J.G.; Longton, G.M.; Carney, P.A.; Geller, B.M.; Onega, T.; Tosteson, A.N.A.; Nelson, H.D.; Pepe, M.S.; Allison, K.H.; Schnitt, S.J.; et al. Diagnostic concordance among pathologists interpreting breast biopsy specimens. JAMA 2015, 313, 1122–1132. [Google Scholar] [CrossRef] [Green Version]
  53. Elmore, J.G.; Barnhill, R.L.; Elder, D.E.; Longton, G.M.; Pepe, M.S.; Reisch, L.M.; Carney, P.A.; Titus, L.J.; Nelson, H.D.; Onega, T.; et al. Pathologists’ diagnosis of invasive melanoma and melanocytic proliferations: Observer accuracy and reproducibility study. BMJ 2017, 357, j2813. [Google Scholar] [CrossRef] [Green Version]
  54. Shah, M.D.; Parwani, A.V.; Zynger, D.L. Impact of the Pathologist on Prostate Biopsy Diagnosis and Immunohistochemical Stain Usage Within a Single Institution. Am. J. Clin. Pathol. 2017, 148, 494–501. [Google Scholar] [CrossRef] [Green Version]
  55. Fenstermaker, M.; Tomlins, S.A.; Singh, K.; Wiens, J.; Morgan, T.M. Development and Validation of a Deep-learning Model to Assist With Renal Cell Carcinoma Histopathologic Interpretation. Urology 2020, 144, 152–157. [Google Scholar] [CrossRef] [PubMed]
  56. van Oostenbrugge, T.J.; Fütterer, J.J.; Mulders, P.F. Diagnostic Imaging for Solid Renal Tumors: A Pictorial Review. Kidney Cancer 2018, 2, 79–93. [Google Scholar] [CrossRef] [PubMed]
  57. Williams, G.M.; Lynch, D.T. Renal Oncocytoma; StatPearls: Treasure Island, FL, USA, 2022. [Google Scholar]
  58. Leone, A.R.; Kidd, L.C.; Diorio, G.J.; Zargar-Shoshtari, K.; Sharma, P.; Sexton, W.J.; Spiess, P.E. Bilateral benign renal oncocytomas and the role of renal biopsy: Single institution review. BMC Urol. 2017, 17, 6. [Google Scholar] [CrossRef] [Green Version]
  59. Zhu, M.; Ren, B.; Richards, R.; Suriawinata, M.; Tomita, N.; Hassanpour, S. Development and evaluation of a deep neural network for histologic classification of renal cell carcinoma on biopsy and surgical resection slides. Sci. Rep. 2021, 11, 7080. [Google Scholar] [CrossRef]
  60. Volpe, A.; Mattar, K.; Finelli, A.; Kachura, J.R.; Evans, A.J.; Geddie, W.R.; Jewett, M.A. Contemporary results of percutaneous biopsy of 100 small renal masses: A single center experience. J. Urol. 2008, 180, 2333–2337. [Google Scholar] [CrossRef]
  61. Wang, R.; Wolf, J.S.; Wood, D.P.; Higgins, E.J.; Hafez, K.S. Accuracy of Percutaneous Core Biopsy in Management of Small Renal Masses. Urology 2009, 73, 586–590. [Google Scholar] [CrossRef]
  62. Barwari, K.; de la Rosette, J.J.; Laguna, M.P. The penetration of renal mass biopsy in daily practice: A survey among urologists. J. Endourol. 2012, 26, 737–747. [Google Scholar] [CrossRef] [PubMed]
  63. Escudier, B. Emerging immunotherapies for renal cell carcinoma. Ann. Oncol. 2012, 23, viii35–viii40. [Google Scholar] [CrossRef]
  64. Yanagisawa, T.; Schmidinger, M.; Fajkovic, H.; Karakiewicz, P.I.; Kimura, T.; Shariat, S.F. What is the role of cytoreductive nephrectomy in patients with metastatic renal cell carcinoma? Expert Rev. Anticancer Ther. 2023, 23, 455–459. [Google Scholar] [CrossRef]
  65. Bertolo, R.; Pecoraro, A.; Carbonara, U.; Amparore, D.; Diana, P.; Muselaers, S.; Marchioni, M.; Mir, M.C.; Antonelli, A.; Badani, K.; et al. Resection Techniques During Robotic Partial Nephrectomy: A Systematic Review. Eur. Urol. Open Sci. 2023, 52, 7–21. [Google Scholar] [CrossRef]
  66. Tabibu, S.; Vinod, P.K.; Jawahar, C.V. Pan-Renal Cell Carcinoma classification and survival prediction from histopathology images using deep learning. Sci. Rep. 2019, 9, 10509. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  67. Chen, S.; Zhang, N.; Jiang, L.; Gao, F.; Shao, J.; Wang, T.; Zhang, E.; Yu, H.; Wang, X.; Zheng, J. Clinical use of a machine learning histopathological image signature in diagnosis and survival prediction of clear cell renal cell carcinoma. Int. J. Cancer 2020, 148, 780–790. [Google Scholar] [CrossRef] [PubMed]
  68. Marostica, E.; Barber, R.; Denize, T.; Kohane, I.S.; Signoretti, S.; Golden, J.A.; Yu, K.-H. Development of a Histopathology Informatics Pipeline for Classification and Prediction of Clinical Outcomes in Subtypes of Renal Cell Carcinoma. Clin. Cancer Res. 2021, 27, 2868–2878. [Google Scholar] [CrossRef] [PubMed]
  69. Pathology Outlines—WHO Classification. Available online: https://www.pathologyoutlines.com/topic/kidneytumorWHOclass.html (accessed on 24 January 2023).
  70. Cimadamore, A.; Cheng, L.; Scarpelli, M.; Massari, F.; Mollica, V.; Santoni, M.; Lopez-Beltran, A.; Montironi, R.; Moch, H. Towards a new WHO classification of renal cell tumor: What the clinician needs to know—A narrative review. Transl. Androl. Urol. 2021, 10, 1506–1520. [Google Scholar] [CrossRef]
  71. Weng, S.; DiNatale, R.G.; Silagy, A.; Mano, R.; Attalla, K.; Kashani, M.; Weiss, K.; Benfante, N.E.; Winer, A.G.; Coleman, J.A.; et al. The Clinicopathologic and Molecular Landscape of Clear Cell Papillary Renal Cell Carcinoma: Implications in Diagnosis and Management. Eur. Urol. 2020, 79, 468–477. [Google Scholar] [CrossRef]
  72. Williamson, S.R.; Eble, J.N.; Cheng, L.; Grignon, D.J. Clear cell papillary renal cell carcinoma: Differential diagnosis and extended immunohistochemical profile. Mod. Pathol. 2013, 26, 697–708. [Google Scholar] [CrossRef] [Green Version]
  73. Abdeltawab, H.A.; Khalifa, F.A.; Ghazal, M.A.; Cheng, L.; El-Baz, A.S.; Gondim, D.D. A deep learning framework for automated classification of histopathological kidney whole-slide images. J. Pathol. Inform. 2022, 13, 100093. [Google Scholar] [CrossRef]
  74. Faust, K.; Roohi, A.; Leon, A.J.; Leroux, E.; Dent, A.; Evans, A.J.; Pugh, T.J.; Kalimuthu, S.N.; Djuric, U.; Diamandis, P. Unsupervised Resolution of Histomorphologic Heterogeneity in Renal Cell Carcinoma Using a Brain Tumor–Educated Neural Network. JCO Clin. Cancer Inform. 2020, 4, 811–821. [Google Scholar] [CrossRef]
75. EAU Guidelines on Renal Cell Carcinoma 2022. Available online: https://uroweb.org/guidelines/renal-cell-carcinoma (accessed on 1 February 2023).
76. Gelb, A.B. Renal cell carcinoma: Current prognostic factors. Union Internationale Contre le Cancer (UICC) and the American Joint Committee on Cancer (AJCC). Cancer 1997, 80, 994–996. Available online: https://doi.org/10.1002/(sici)1097-0142(19970901)80:5<994::aid-cncr27>3.0.co;2-q (accessed on 1 February 2023).
  77. Beksac, A.T.; Paulucci, D.J.; Blum, K.A.; Yadav, S.S.; Sfakianos, J.P.; Badani, K.K. Heterogeneity in renal cell carcinoma. Urol. Oncol. Semin. Orig. Investig. 2017, 35, 507–515. [Google Scholar] [CrossRef] [PubMed]
  78. Dall’Oglio, M.F.; Ribeiro-Filho, L.A.; Antunes, A.A.; Crippa, A.; Nesrallah, L.; Gonçalves, P.D.; Leite, K.R.M.; Srougi, M. Microvascular Tumor Invasion, Tumor Size and Fuhrman Grade: A Pathological Triad for Prognostic Evaluation of Renal Cell Carcinoma. J. Urol. 2007, 178, 425–428. [Google Scholar] [CrossRef] [PubMed]
  79. Tsui, K.-H.; Shvarts, O.; Smith, R.B.; Figlin, R.A.; Dekernion, J.B.; Belldegrun, A. Prognostic indicators for renal cell carcinoma: A multivariate analysis of 643 patients using the revised 1997 tnm staging criteria. J. Urol. 2000, 163, 1090–1095. [Google Scholar] [CrossRef] [PubMed]
  80. Ficarra, V.; Righetti, R.; Pilloni, S.; D’amico, A.; Maffei, N.; Novella, G.; Zanolla, L.; Malossini, G.; Mobilio, G. Prognostic Factors in Patients with Renal Cell Carcinoma: Retrospective Analysis of 675 Cases. Eur. Urol. 2002, 41, 190–198. [Google Scholar] [CrossRef]
81. Fuhrman, S.A.; Lasky, L.C.; Limas, C. Prognostic significance of morphologic parameters in renal cell carcinoma. Am. J. Surg. Pathol. 1982, 6, 655–663. Available online: https://www.scopus.com/record/display.uri?eid=2-s2.0-2642552183&origin=inward (accessed on 18 January 2023).
82. Prognostic Value of Nuclear Grade of Renal Cell Carcinoma. Available online: https://acsjournals.onlinelibrary.wiley.com/doi/epdf/10.1002/1097-0142(19951215)76:12%3C2543::AID-CNCR2820761221%3E3.0.CO;2-S (accessed on 18 January 2023).
  83. Bektas, S.; Bahadir, B.; Kandemir, N.O.; Barut, F.; Gul, A.E.; Ozdamar, S.O. Intraobserver and Interobserver Variability of Fuhrman and Modified Fuhrman Grading Systems for Conventional Renal Cell Carcinoma. Kaohsiung J. Med. Sci. 2009, 25, 596–600. Available online: https://onlinelibrary.wiley.com/doi/abs/10.1016/S1607-551X(09)70562-5 (accessed on 18 January 2023). [CrossRef] [Green Version]
  84. Yeh, F.-C.; Parwani, A.V.; Pantanowitz, L.; Ho, C. Automated grading of renal cell carcinoma using whole slide imaging. J. Pathol. Inform. 2014, 5, 23. [Google Scholar] [CrossRef]
  85. Paner, G.P.; Stadler, W.M.; Hansel, D.E.; Montironi, R.; Lin, D.W.; Amin, M.B. Updates in the Eighth Edition of the Tumor-Node-Metastasis Staging Classification for Urologic Cancers. Eur. Urol. 2018, 73, 560–569. [Google Scholar] [CrossRef]
  86. Holdbrook, D.A.; Singh, M.; Choudhury, Y.; Kalaw, E.M.; Koh, V.; Tan, H.S.; Kanesvaran, R.; Tan, P.H.; Peng, J.Y.S.; Tan, M.-H.; et al. Automated Renal Cancer Grading Using Nuclear Pleomorphic Patterns. JCO Clin. Cancer Inform. 2018, 2, 1–12. [Google Scholar] [CrossRef]
  87. Qayyum, T.; McArdle, P.; Orange, C.; Seywright, M.; Horgan, P.; Oades, G.; Aitchison, M.; Edwards, J. Reclassification of the Fuhrman grading system in renal cell carcinoma-does it make a difference? SpringerPlus 2013, 2, 378. [Google Scholar] [CrossRef] [Green Version]
  88. Tian, K.; Rubadue, C.A.; Lin, D.; Veta, M.; Pyle, M.E.; Irshad, H.; Heng, Y.J. Automated clear cell renal carcinoma grade classification with prognostic significance. PLoS ONE 2019, 14, e0222641. [Google Scholar] [CrossRef] [Green Version]
  89. Song, J.; Xiao, L.; Lian, Z. Contour-Seed Pairs Learning-Based Framework for Simultaneously Detecting and Segmenting Various Overlapping Cells/Nuclei in Microscopy Images. IEEE Trans. Image Process. 2018, 27, 5759–5774. [Google Scholar] [CrossRef]
  90. Arjumand, W.; Sultana, S. Role of VHL gene mutation in human renal cell carcinoma. Tumor Biol. 2012, 33, 9–16. Available online: https://link.springer.com/article/10.1007/s13277-011-0257-3 (accessed on 18 January 2023). [CrossRef] [PubMed]
  91. Nogueira, M.; Kim, H.L. Molecular markers for predicting prognosis of renal cell carcinoma. Urol. Oncol. Semin. Orig. Investig. 2008, 26, 113–124. [Google Scholar] [CrossRef] [PubMed]
  92. Roussel, E.; Beuselinck, B.; Albersen, M. Tailoring treatment in metastatic renal cell carcinoma. Nat. Rev. Urol. 2022, 19, 455–456. [Google Scholar] [CrossRef] [PubMed]
  93. Funakoshi, T.; Lee, C.-H.; Hsieh, J.J. A systematic review of predictive and prognostic biomarkers for VEGF-targeted therapy in renal cell carcinoma. Cancer Treat. Rev. 2014, 40, 533–547. [Google Scholar] [CrossRef]
  94. Rodriguez-Vida, A.; Strijbos, M.; Hutson, T.X. Predictive and prognostic biomarkers of targeted agents and modern immunotherapy in renal cell carcinoma. ESMO Open 2016, 1, e000013. [Google Scholar] [CrossRef] [Green Version]
  95. Motzer, R.J.; Robbins, P.B.; Powles, T.; Albiges, L.; Haanen, J.B.; Larkin, J.; Mu, X.J.; Ching, K.A.; Uemura, M.; Pal, S.K.; et al. Avelumab plus axitinib versus sunitinib in advanced renal cell carcinoma: Biomarker analysis of the phase 3 JAVELIN Renal 101 trial. Nat. Med. 2020, 26, 1733–1741. [Google Scholar] [CrossRef]
  96. Schimmel, H.; Zegers, I.; Emons, H. Standardization of protein biomarker measurements: Is it feasible? Scand. J. Clin. Lab. Investig. 2010, 70, 27–33. [Google Scholar] [CrossRef]
  97. Mayeux, R. Biomarkers: Potential uses and limitations. NeuroRx 2004, 1, 182–188. [Google Scholar] [CrossRef]
  98. Singh, N.P.; Bapi, R.S.; Vinod, P. Machine learning models to predict the progression from early to late stages of papillary renal cell carcinoma. Comput. Biol. Med. 2018, 100, 92–99. [Google Scholar] [CrossRef]
99. Bhalla, S.; Chaudhary, K.; Kumar, R.; Sehgal, M.; Kaur, H.; Sharma, S.; Raghava, G.P.S. Gene expression-based biomarkers for discriminating early and late stage of clear cell renal cancer. Sci. Rep. 2017, 7, 44997. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  100. Fernandes, F.G.; Silveira, H.C.S.; Júnior, J.N.A.; da Silveira, R.A.; Zucca, L.E.; Cárcano, F.M.; Sanches, A.O.N.; Neder, L.; Scapulatempo-Neto, C.; Serrano, S.V.; et al. Somatic Copy Number Alterations and Associated Genes in Clear-Cell Renal-Cell Carcinoma in Brazilian Patients. Int. J. Mol. Sci. 2021, 22, 2265. [Google Scholar] [CrossRef] [PubMed]
  101. D’Avella, C.; Abbosh, P.; Pal, S.K.; Geynisman, D.M. Mutations in renal cell carcinoma. Urol. Oncol. Semin. Orig. Investig. 2018, 38, 763–773. [Google Scholar] [CrossRef] [PubMed]
  102. Havel, J.J.; Chowell, D.; Chan, T.A. The evolving landscape of biomarkers for checkpoint inhibitor immunotherapy. Nat. Rev. Cancer 2019, 19, 133–150. [Google Scholar] [CrossRef] [PubMed]
  103. Farrukh, M.; Ali, M.A.; Naveed, M.; Habib, R.; Khan, H.; Kashif, T.; Zubair, H.; Saeed, M.; Butt, S.K.; Niaz, R.; et al. Efficacy and Safety of Checkpoint Inhibitors in Clear Cell Renal Cell Carcinoma: A Systematic Review of Clinical Trials. Hematol. Oncol. Stem. Cell Ther. 2023, 16, 170–185. [Google Scholar] [CrossRef] [PubMed]
  104. Go, H.; Kang, M.J.; Kim, P.-J.; Lee, J.-L.; Park, J.Y.; Park, J.-M.; Ro, J.Y.; Cho, Y.M. Development of Response Classifier for Vascular Endothelial Growth Factor Receptor (VEGFR)-Tyrosine Kinase Inhibitor (TKI) in Metastatic Renal Cell Carcinoma. Pathol. Oncol. Res. 2017, 25, 51–58. [Google Scholar] [CrossRef]
  105. Padmanabhan, R.K.; Somasundar, V.H.; Griffith, S.D.; Zhu, J.; Samoyedny, D.; Tan, K.S.; Hu, J.; Liao, X.; Carin, L.; Yoon, S.S.; et al. An Active Learning Approach for Rapid Characterization of Endothelial Cells in Human Tumors. PLoS ONE 2014, 9, e90495. [Google Scholar] [CrossRef]
  106. Ing, N.; Huang, F.; Conley, A.; You, S.; Ma, Z.; Klimov, S.; Ohe, C.; Yuan, X.; Amin, M.B.; Figlin, R.; et al. A novel machine learning approach reveals latent vascular phenotypes predictive of renal cancer outcome. Sci. Rep. 2017, 7, 13190. [Google Scholar] [CrossRef] [Green Version]
  107. Herman, J.G.; Latif, F.; Weng, Y.; Lerman, M.I.; Zbar, B.; Liu, S.; Samid, D.; Duan, D.S.; Gnarra, J.R.; Linehan, W.M. Silencing of the VHL tumor-suppressor gene by DNA methylation in renal carcinoma. Proc. Natl. Acad. Sci. USA 1994, 91, 9700–9704. [Google Scholar] [CrossRef]
  108. Yamana, K.; Ohashi, R.; Tomita, Y. Contemporary Drug Therapy for Renal Cell Carcinoma—Evidence Accumulation and Histological Implications in Treatment Strategy. Biomedicines 2022, 10, 2840. [Google Scholar] [CrossRef]
  109. Zhu, L.; Wang, J.; Kong, W.; Huang, J.; Dong, B.; Huang, Y.; Xue, W.; Zhang, J. LSD1 inhibition suppresses the growth of clear cell renal cell carcinoma via upregulating P21 signaling. Acta Pharm. Sin. B 2018, 9, 324–334. [Google Scholar] [CrossRef] [PubMed]
  110. Chen, W.; Zhang, H.; Chen, Z.; Jiang, H.; Liao, L.; Fan, S.; Xing, J.; Xie, Y.; Chen, S.; Ding, H.; et al. Development and evaluation of a novel series of Nitroxoline-derived BET inhibitors with antitumor activity in renal cell carcinoma. Oncogenesis 2018, 7, 83. [Google Scholar] [CrossRef] [Green Version]
  111. Joosten, S.C.; Smits, K.M.; Aarts, M.J.; Melotte, V.; Koch, A.; Tjan-Heijnen, V.C.; Van Engeland, M. Epigenetics in renal cell cancer: Mechanisms and clinical applications. Nat. Rev. Urol. 2018, 15, 430–451. [Google Scholar] [CrossRef]
  112. Zheng, H.; Momeni, A.; Cedoz, P.-L.; Vogel, H.; Gevaert, O. Whole slide images reflect DNA methylation patterns of human tumors. npj Genom. Med. 2020, 5, 11. [Google Scholar] [CrossRef]
  113. Singh, N.P.; Vinod, P.K. Integrative analysis of DNA methylation and gene expression in papillary renal cell carcinoma. Mol. Genet. Genom. 2020, 295, 807–824. [Google Scholar] [CrossRef] [PubMed]
  114. Guida, A.; Le Teuff, G.; Alves, C.; Colomba, E.; Di Nunno, V.; Derosa, L.; Flippot, R.; Escudier, B.; Albiges, L. Identification of international metastatic renal cell carcinoma database consortium (IMDC) intermediate-risk subgroups in patients with metastatic clear-cell renal cell carcinoma. Oncotarget 2020, 11, 4582–4592. [Google Scholar] [CrossRef] [PubMed]
  115. Zigeuner, R.; Hutterer, G.; Chromecki, T.; Imamovic, A.; Kampel-Kettner, K.; Rehak, P.; Langner, C.; Pummer, K. External validation of the Mayo Clinic stage, size, grade, and necrosis (SSIGN) score for clear-cell renal cell carcinoma in a single European centre applying routine pathology. Eur. Urol. 2010, 57, 102–111. [Google Scholar] [CrossRef]
116. Prediction of Progression after Radical Nephrectomy for Patients with Clear Cell Renal Cell Carcinoma: A Stratification Tool for Prospective Clinical Trials. Available online: https://pubmed.ncbi.nlm.nih.gov/12655523/ (accessed on 5 March 2023).
  117. Zisman, A.; Pantuck, A.J.; Dorey, F.; Said, J.W.; Shvarts, O.; Quintana, D.; Gitlitz, B.J.; Dekernion, J.B.; Figlin, R.A.; Belldegrun, A.S. Improved prognostication of renal cell carcinoma using an integrated staging system. J. Clin. Oncol. 2001, 19, 1649–1657. [Google Scholar] [CrossRef]
  118. Lubbock, A.L.R.; Stewart, G.D.; O’mahony, F.C.; Laird, A.; Mullen, P.; O’donnell, M.; Powles, T.; Harrison, D.J.; Overton, I.M. Overcoming intratumoural heterogeneity for reproducible molecular risk stratification: A case study in advanced kidney cancer. BMC Med. 2017, 15, 118. [Google Scholar] [CrossRef] [Green Version]
  119. Heng, D.Y.; Xie, W.; Regan, M.M.; Harshman, L.C.; Bjarnason, G.A.; Vaishampayan, U.N.; Mackenzie, M.; Wood, L.; Donskov, F.; Tan, M.-H.; et al. External validation and comparison with other models of the International Metastatic Renal-Cell Carcinoma Database Consortium prognostic model: A population-based study. Lancet Oncol. 2013, 14, 141–148. [Google Scholar] [CrossRef] [Green Version]
  120. Erdem, S.; Capitanio, U.; Campi, R.; Mir, M.C.; Roussel, E.; Pavan, N.; Kara, O.; Klatte, T.; Kriegmair, M.C.; Degirmenci, E.; et al. External validation of the VENUSS prognostic model to predict recurrence after surgery in non-metastatic papillary renal cell carcinoma: A multi-institutional analysis. Urol. Oncol. Semin. Orig. Investig. 2022, 40, 198.e9–198.e17. [Google Scholar] [CrossRef]
  121. Di Nunno, V.; Mollica, V.; Schiavina, R.; Nobili, E.; Fiorentino, M.; Brunocilla, E.; Ardizzoni, A.; Massari, F. Improving IMDC Prognostic Prediction Through Evaluation of Initial Site of Metastasis in Patients With Metastatic Renal Cell Carcinoma. Clin. Genitourin. Cancer 2020, 18, e83–e90. [Google Scholar] [CrossRef]
  122. Huang, S.-C.; Pareek, A.; Seyyedi, S.; Banerjee, I.; Lungren, M.P. Fusion of medical imaging and electronic health records using deep learning: A systematic review and implementation guidelines. npj Digit. Med. 2020, 3, 136. [Google Scholar] [CrossRef] [PubMed]
  123. Wessels, F.; Schmitt, M.; Krieghoff-Henning, E.; Kather, J.N.; Nientiedt, M.; Kriegmair, M.C.; Worst, T.S.; Neuberger, M.; Steeg, M.; Popovic, Z.V.; et al. Deep learning can predict survival directly from histology in clear cell renal cell carcinoma. PLoS ONE 2022, 17, e0272656. [Google Scholar] [CrossRef]
  124. Chen, S.; Jiang, L.; Gao, F.; Zhang, E.; Wang, T.; Zhang, N.; Wang, X.; Zheng, J. Machine learning-based pathomics signature could act as a novel prognostic marker for patients with clear cell renal cell carcinoma. Br. J. Cancer 2021, 126, 771–777. [Google Scholar] [CrossRef] [PubMed]
  125. Cheng, J.; Zhang, J.; Han, Y.; Wang, X.; Ye, X.; Meng, Y.; Parwani, A.; Han, Z.; Feng, Q.; Huang, K. Integrative Analysis of Histopathological Images and Genomic Data Predicts Clear Cell Renal Cell Carcinoma Prognosis. Cancer Res. 2017, 77, e91–e100. [Google Scholar] [CrossRef]
  126. Ning, Z.; Pan, W.; Chen, Y.; Xiao, Q.; Zhang, X.; Luo, J.; Wang, J.; Zhang, Y. Integrative analysis of cross-modal features for the prognosis prediction of clear cell renal cell carcinoma. Bioinformatics 2020, 36, 2888–2895. [Google Scholar] [CrossRef]
  127. Schulz, S.; Woerl, A.-C.; Jungmann, F.; Glasner, C.; Stenzel, P.; Strobl, S.; Fernandez, A.; Wagner, D.-C.; Haferkamp, A.; Mildenberger, P.; et al. Multimodal Deep Learning for Prognosis Prediction in Renal Cancer. Front. Oncol. 2021, 11, 788740. [Google Scholar] [CrossRef] [PubMed]
  128. Khene, Z.; Kutikov, A.; Campi, R.; the EAU-YAU Renal Cancer Working Group. Machine learning in renal cell carcinoma research: The promise and pitfalls of ‘renal-izing’ the potential of artificial intelligence. BJU Int. 2023. [Google Scholar] [CrossRef] [PubMed]
  129. Wu, Z.; Carbonara, U.; Campi, R. Re: Criteria for the Translation of Radiomics into Clinically Useful Tests. Eur. Urol. 2023, 84, 142–143. [Google Scholar] [CrossRef]
  130. Shortliffe, E.H.; Sepúlveda, M.J. Clinical Decision Support in the Era of Artificial Intelligence. JAMA 2018, 320, 2199–2200. [Google Scholar] [CrossRef] [PubMed]
131. Durán, J.M.; Jongsma, K.R. Who is afraid of black box algorithms? On the epistemological and ethical basis of trust in medical AI. J. Med. Ethics 2021, 47, 329–335. [Google Scholar] [CrossRef] [PubMed]
  132. Teo, Y.Y.; Danilevsky, A.; Shomron, N. Overcoming Interpretability in Deep Learning Cancer Classification. Methods Mol. Biol. 2021, 2243, 297–309. [Google Scholar] [CrossRef] [PubMed]
  133. Das, S.; Moore, T.; Wong, W.-K.; Stumpf, S.; Oberst, I.; McIntosh, K.; Burnett, M. End-user feature labeling: Supervised and semi-supervised approaches based on locally-weighted logistic regression. Artif. Intell. 2013, 204, 56–74. [Google Scholar] [CrossRef] [Green Version]
  134. Krzywinski, M.; Altman, N. Points of significance: Power and sample size. Nat. Methods 2013, 10, 1139–1140. [Google Scholar] [CrossRef]
  135. Button, K.S.; Ioannidis, J.P.A.; Mokrysz, C.; Nosek, B.A.; Flint, J.; Robinson, E.S.J.; Munafò, M.R. Power failure: Why small sample size undermines the reliability of neuroscience. Nat. Rev. Neurosci. 2013, 14, 365–376. [Google Scholar] [CrossRef] [Green Version]
  136. Wang, L. Heterogeneous Data and Big Data Analytics. Autom. Control Inf. Sci. 2017, 3, 8–15. [Google Scholar] [CrossRef] [Green Version]
  137. Borodinov, N.; Neumayer, S.; Kalinin, S.V.; Ovchinnikova, O.S.; Vasudevan, R.K.; Jesse, S. Deep neural networks for understanding noisy data applied to physical property extraction in scanning probe microscopy. npj Comput. Mater. 2019, 5, 25. [Google Scholar] [CrossRef] [Green Version]
  138. Bin Goh, W.W.; Wong, L. Dealing with Confounders in Omics Analysis. Trends Biotechnol. 2018, 36, 488–498. [Google Scholar] [CrossRef]
  139. Tougui, I.; Jilbab, A.; El Mhamdi, J. Impact of the Choice of Cross-Validation Techniques on the Results of Machine Learning-Based Diagnostic Applications. Health Inform. Res. 2021, 27, 189–199. [Google Scholar] [CrossRef]
  140. Veta, M.; Heng, Y.J.; Stathonikos, N.; Bejnordi, B.E.; Beca, F.; Wollmann, T.; Rohr, K.; Shah, M.A.; Wang, D.; Rousson, M.; et al. Predicting breast tumor proliferation from whole-slide images: The TUPAC16 challenge. Med. Image Anal. 2019, 54, 111–121. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  141. Sirinukunwattana, K.; Pluim, J.P.; Chen, H.; Qi, X.; Heng, P.-A.; Guo, Y.B.; Wang, L.Y.; Matuszewski, B.J.; Bruni, E.; Sanchez, U.; et al. Gland segmentation in colon histology images: The glas challenge contest. Med. Image Anal. 2017, 35, 489–502. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  142. Yagi, Y. Color standardization and optimization in Whole Slide Imaging. Diagn. Pathol. 2011, 6, S15. [Google Scholar] [CrossRef] [Green Version]
  143. Tellez, D.; Litjens, G.; Bándi, P.; Bulten, W.; Bokhorst, J.-M.; Ciompi, F.; van der Laak, J. Quantifying the effects of data augmentation and stain color normalization in convolutional neural networks for computational pathology. Med. Image Anal. 2019, 58, 101544. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  144. Ho, S.Y.; Phua, K.; Wong, L.; Bin Goh, W.W. Extensions of the External Validation for Checking Learned Model Interpretability and Generalizability. Patterns 2020, 1, 100129. [Google Scholar] [CrossRef] [PubMed]
  145. Mühlbauer, J.; Egen, L.; Kowalewski, K.-F.; Grilli, M.; Walach, M.T.; Westhoff, N.; Nuhn, P.; Laqua, F.C.; Baessler, B.; Kriegmair, M.C. Radiomics in Renal Cell Carcinoma—A Systematic Review and Meta-Analysis. Cancers 2021, 13, 1348. [Google Scholar] [CrossRef]
  146. Lu, C.; Shiradkar, R.; Liu, Z. Integrating pathomics with radiomics and genomics for cancer prognosis: A brief review. Chin. J. Cancer Res. 2021, 33, 563–573. [Google Scholar] [CrossRef]
Figure 1. Pathway for the development of pathomics algorithms. After the sample is obtained via surgical resection or biopsy, the whole-slide image (WSI) is created with a digital scanner, and patches derived from it are used to train the algorithm to build diagnostic, prognostic, or predictive models. Supervised learning-based algorithms can carry the "black box" issue (see Section 6).
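The patch-derivation step of the pipeline in Figure 1 can be sketched in a few lines of Python. This is an illustrative toy, not any cited group's implementation: the patch size, the near-white background threshold (220), and the 50% tissue cutoff are all assumptions.

```python
import numpy as np

def tile_wsi(image: np.ndarray, patch_size: int = 256, tissue_threshold: float = 0.5):
    """Split an RGB whole-slide image array into square patches,
    keeping only patches that contain enough non-background tissue."""
    h, w, _ = image.shape
    kept = []
    for y in range(0, h - patch_size + 1, patch_size):
        for x in range(0, w - patch_size + 1, patch_size):
            patch = image[y:y + patch_size, x:x + patch_size]
            # Background in H&E slides is near-white; count darker pixels as tissue.
            tissue_fraction = (patch.mean(axis=-1) < 220).mean()
            if tissue_fraction >= tissue_threshold:
                kept.append(((y, x), patch))
    return kept

# Toy "slide": 512x512, white background with dark tissue in the top-left quadrant.
slide = np.full((512, 512, 3), 255, dtype=np.uint8)
slide[:256, :256] = 100
patches = tile_wsi(slide)  # only the top-left patch survives the tissue filter
```

In a real pipeline the retained patches (with their slide coordinates) would be fed to the training algorithm; production systems typically read tiles lazily from pyramidal WSI formats rather than loading the full image into memory.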
Figure 2. Challenges in clinical translation after the development of a new ML algorithm.
Table 2. Overview of studies on AI models for RCC grading.
Yeh et al. [84]
- Aim: RCC grading.
- Number of patients: 39 ccRCC.
- Training process/methodologies: pixels from the nuclei were manually selected to train an SVM classifier to recognize nuclei. A person with no special training in pathology trained the classifier through an interactive interface.
- Accuracy on the test set: AUC: 0.97.
- External validation (n of patients): N.A.
- Accuracy on the external validation cohort: N.A.
- Algorithm: WSI analysis with an automatic stain recognition algorithm. An SVM classifier was trained to recognize nuclei. Sizes of the recognized nuclei were estimated, and the spatial distribution of nuclear size was calculated using kernel regression.

Holdbrook et al. [86]
- Aim: (1) RCC grading; (2) survival prediction.
- Number of patients: 59 ccRCC.
- Training process/methodologies: a cascade detector of prominent nucleoli (constructed by stacking 20 classifiers sequentially) was trained on WSIs to extract image patches for subsequent analysis. The pipeline used two nucleoli detectors to extract prominent-nucleoli image patches.
- Accuracy on the test set: (1) F-score: 0.78–0.83 for grade prediction; (2) high degree of correlation (R = 0.59) with a multigene score.
- External validation (n of patients): N.A.
- Accuracy on the external validation cohort: N.A.
- Algorithm: an automated image classification pipeline detected and analyzed prominent nucleoli in WSIs and classified them as either low or high grade. The pipeline employed ML and image pixel intensity-based feature extraction methods for nuclear analysis. Multiple classification systems were used for patch classification (SVM, logistic regression, and AdaBoost).

Tian et al. [88]
- Aim: (1) RCC grading; (2) survival prediction.
- Number of patients: 395 ccRCC.
- Training process/methodologies: seven ML classification methods for categorizing grade from nuclear histomic features were evaluated. Among these, LASSO regression demonstrated the highest performance, with a built-in feature selection capability; LASSO regression with its optimal hyperparameter selected the final list of histomic features most associated with grade.
- Accuracy on the test set: (1) 84.6% sensitivity and 81.3% specificity for grade prediction; (2) predicted grade associated with overall survival (HR: 2.05; 95% CI 1.21–3.47).
- External validation (n of patients): N.A.
- Accuracy on the external validation cohort: N.A.
- Algorithm: nuclei were segmented and 72 features were extracted. Features associated with grade were identified via a LASSO model using cases with concordance between TCGA and Pathologist 1; discordant cases were additionally reviewed by Pathologist 2. Prognostic efficacy of the predicted grades was evaluated using a Cox proportional hazards model in an extended test set combining the test set and the discordant cases.
AUC = area under curve, ccRCC = clear cell renal cell carcinoma, CI = confidence interval, HR = hazard ratio, DNN = deep neural network, LASSO = least absolute shrinkage and selection operator, N.A. = not applicable, SVM = support vector machine.
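The LASSO-style feature selection summarized in Table 2 for Tian et al. [88] can be sketched with scikit-learn. Everything below is synthetic and illustrative: the feature matrix, sample counts, and the regularization strength C are invented for the example, not taken from the study.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Synthetic stand-in for per-slide nuclear histomic features
# (e.g., mean nuclear area, perimeter, texture): 200 slides, 20 features.
X = rng.normal(size=(200, 20))
# Let only the first two features carry the "grade" signal.
logits = 1.5 * X[:, 0] - 1.0 * X[:, 1]
y = (logits + rng.normal(scale=0.5, size=200) > 0).astype(int)  # 0 = low, 1 = high grade

# An L1 (LASSO-style) penalty shrinks uninformative coefficients to exactly
# zero, giving the built-in feature selection the table refers to.
clf = LogisticRegression(penalty="l1", solver="liblinear", C=0.1)
clf.fit(X, y)
selected = np.flatnonzero(clf.coef_[0])  # indices of retained features
acc = cross_val_score(clf, X, y, cv=5).mean()
```

On this toy data the informative feature 0 survives the penalty while most noise features are dropped; in the actual study the retained features were then carried into a survival analysis.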
Table 3. Studies aimed to uncover molecular-morphological connections and/or AI-based therapy response prediction.
Marostica et al. [68]
- Aim: (1) RCC diagnosis; (2) RCC subtyping; (3) CNA identification; (4) RCC survival prediction; (5) tumor mutation burden prediction.
- Number of patients: (1) and (2): 537 ccRCC, 288 pRCC, and 103 chRCC; (3) 528 ccRCC, 288 pRCC, and 66 chRCC; (4) 269 stage I ccRCC; (5) 302 ccRCC.
- Training process/methodologies: (1) a weak-supervision approach was used for malignant region identification; (2) the same transfer learning approach was trained for 15 epochs; (3) independent models were developed for ccRCC, pRCC, and chRCC; (4) 10-fold cross-validation was employed, with upsampling of uncensored data points in each fold's training set to enhance model training.
- Accuracy on the test set: (1) AUC: 0.990 ccRCC, 1.00 pRCC, 0.9998 chRCC; (2) AUC: 0.953; (3) ccRCC KRAS CNA: AUC = 0.724; pRCC somatic mutations: AUC: 0.419–0.684; (4) short- vs. long-term survivors, log-rank test p = 0.02, n = 269; (5) Spearman's correlation coefficient: 0.419.
- External validation (n of patients): (1) and (2): 841 ccRCC, 41 pRCC, and 31 chRCC.
- Accuracy on the external validation cohort: (1) 0.964–0.985 ccRCC; (2) 0.782–0.993.
- Algorithm: (1) three DCNN architectures (VGG-16, Inception-v3, and ResNet-50) were compared for each task; (2) the same transfer learning approach as above was used, with DCNN hyperparameters optimized via Talos; (3) two transfer learning approaches were used, gene-specific binary classification and multi-task classification across all genes for CNAs, and DCNNs were used to associate genetic mutations with WSIs; (4) DCNN models took image patches as inputs and predicted binary values per patient; Grad-CAM maps were generated to identify the regions of greatest importance for survival prediction.

Go et al. [104]
- Aim: VEGFR-TKI response classification; survival prediction.
- Number of patients: 101 m-ccRCC.
- Training process/methodologies: ML approaches were applied to establish a predictive classifier for VEGFR-TKI response; a 10-fold cross-validated SVM method and decision tree analysis were used for modeling.
- Accuracy on the test set: apparent accuracy of the model: 87.5%; C-index = 0.7001 for PFS; C-index = 0.6552 for OS.
- External validation (n of patients): N.A.
- Accuracy on the external validation cohort: N.A.
- Algorithm: features showing statistical differences between the good- and bad-response groups were selected, and the most appropriate cut-off for each feature was calculated. Secondary feature selection was performed using SVM to develop the most efficient model, i.e., the model showing the highest accuracy with the fewest features.

Ing et al. [106]
- Aim: (1) RCC vascular phenotypes; (2) survival prediction; (3) identification of a prognostic gene signature; (4) prediction models.
- Number of patients: (1), (2), and (3): 64 ccRCC; (4) 301 ccRCC.
- Training process/methodologies: a stochastic backwards feature selection method with 1500 iterations identified the subset of vascular features (VFs) with the highest predictive power. Two GLMNET models were trained: one on VF risk groups, the other using 24-month disease-free status as the ground truth for a validation cohort.
- Accuracy on the test set: (1) AUC = 0.79; (2) log-rank p = 0.019, HR = 2.4; (3) Wilcoxon rank-sum test p < 0.0511; (4) C-index: stage = 0.7; stage + 14VF = 0.74; stage + 14GT = 0.74.
- External validation (n of patients): N.A.
- Accuracy on the external validation cohort: N.A.
- Algorithm: quantitative analysis of tumor vasculature and development of a gene signature. The trained algorithms classified endothelial cells with SVM and random forest classifiers and generated a vascular area mask (VAM) within each WSI. Quantifying the VAMs yielded nine VFs with predictive value for DFS in a discovery cohort, and correlation analysis revealed a 14-gene expression signature related to the nine VFs. The two GLMNET models developed from these 14 genes separated independent cohorts into groups with good or poor DFS, assessed via Kaplan–Meier plots.

Zheng et al. [112]
- Aim: RCC methylation profile.
- Number of patients: 326 RCC (also tested on glioma).
- Training process/methodologies: in total, 30 sets of training/testing data were generated. Binary classifiers were fitted on the training set, with the best parameters selected using 5-fold cross-validation. Logistic regression with LASSO regularization, random forest, SVM, AdaBoost, naive Bayes, and a two-layer FCNN were used with optimized parameters.
- Accuracy on the test set: average AUC and F1 score higher than 0.6.
- External validation (n of patients): N.A.
- Accuracy on the external validation cohort: N.A.
- Algorithm: to demonstrate that DNA methylation can be predicted from morphometric features, several classical ML models were tested. Binary classifiers for each task were evaluated using accuracy, precision, recall, F1-score, ROC curve, AUC, and precision–recall curves; scores from the 30 training/testing sets were averaged per task. For logistic regression, feature importance analysis ranked the influence of morphometric features on the prediction task.
AUC = area under curve, ccRCC = clear cell renal cell carcinoma, chRCC = chromophobe renal cell carcinoma, CNA = copy number alteration, DCNN = deep convolutional neural network, DFS = disease-free survival, FCNN = fully connected neural network, GLMNET = elastic-net regularized generalized linear models, Grad-CAM = gradient-weighted class activation mapping, LASSO = least absolute shrinkage and selection operator, ML = machine learning, N.A. = not applicable, OS = overall survival, PFS = progression-free survival, pRCC = papillary renal cell carcinoma, ROC = receiver operating characteristic, SVM = support vector machine, VAM = vascular area mask, VEGFR-TKI = vascular endothelial growth factor receptor–tyrosine kinase inhibitor, VF = vascular features.
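The cross-validated SVM modeling pattern that recurs in Table 3 (e.g., the 10-fold cross-validated SVM response classifier of Go et al. [104]) can be sketched as follows. The data here are synthetic, and the pipeline is a generic illustration under assumed settings (linear kernel, standardized features), not the published model.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score, StratifiedKFold

rng = np.random.default_rng(1)

# Synthetic stand-in for histopathological features of good vs. bad
# VEGFR-TKI responders: 100 patients, 10 features.
X = rng.normal(size=(100, 10))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=100) > 0).astype(int)

# Scale features, then fit a linear SVM; evaluate with stratified 10-fold CV
# so each fold preserves the responder/non-responder ratio.
model = make_pipeline(StandardScaler(), SVC(kernel="linear"))
cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
scores = cross_val_score(model, X, y, cv=cv)
mean_acc = scores.mean()
```

Reporting the mean (and spread) of the fold accuracies, rather than a single train/test split, is what makes the "apparent accuracy" figures in the table comparable across studies.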
Table 4. Prognostic models.
Ning et al. [126]
- Aim: RCC prognosis prediction.
- Number of patients: 209 ccRCC.
- Training process/methodologies: training employed 10-fold cross-validation. Survival distributions of low- and high-risk groups were estimated using the Kaplan–Meier estimator and compared via the log-rank test; prognostic performance was assessed using the C-index.
- Accuracy on the test set: mean C-index = 0.832 (0.761–0.903).
- External validation (n of patients): N.A.
- Accuracy on the external validation cohort: N.A.
- Algorithm: two CNNs with identical structures extracted deep features from CT and histopathological images. Histological patches were carefully reviewed by two pathologists to confirm coverage of tumor cells. Global pooling and fully connected layers at the end of the network integrated information from all feature maps to make predictions. The BFPS algorithm was employed for feature selection.

Cheng et al. [125]
- Aim: RCC prognosis prediction.
- Number of patients: 410 ccRCC.
- Training process/methodologies: a two-level cross-validation strategy was used to validate the method. At the first level, a single patient was held out as the test set, with the rest used for training; the second level was a 10-fold cross-validation within the training set to select the best regularization parameter. A regularized Cox proportional hazards model was built on the training set using the selected parameter, and risk indices for all patients were calculated from the model.
- Accuracy on the test set: log-rank test p values < 0.05.
- External validation (n of patients): N.A.
- Accuracy on the external validation cohort: N.A.
- Algorithm: an unsupervised segmentation method was used for cell nuclei and feature extraction. lmQCM was used to perform gene co-expression network analysis. The LASSO-Cox model for prognosis prediction calculated a risk index for each patient based on their cellular morphologic features and eigengenes.

Schulz et al. [127]
- Aim: RCC prognosis prediction.
- Number of patients: 248 ccRCC.
- Training process/methodologies: unimodal training was conducted first, followed by multimodal training initialized with the pre-trained unimodal weights. Training lasted 200–400 epochs, and the best model was selected based on the convergence of training and validation curves. The standard Cox loss function was employed for survival analysis, while the cross-entropy loss function was used for binary classification tasks.
- Accuracy on the test set: mean C-index of 0.7791 and mean accuracy of 83.43% (prognosis prediction).
- External validation (n of patients): 18 ccRCC.
- Accuracy on the external validation cohort: mean C-index of 0.799 ± 0.060 (maximum 0.8662); mean accuracy of 79.17% ± 9.8% (maximum 94.44%).
- Algorithm: a CNN consisting of one 18-layer residual network (ResNet) per image modality (histopathology slides, CT scans, MR scans) and a dense layer for genomic data. The network outputs were combined by an attention layer, which weighted each output according to its relevance to the task, and then passed through a fully connected network. Depending on the case, either C-index calculation or binary classification of 5-year survival status (5YSS) was performed; the 5YSS category comprised patients who either survived longer than 60 months or died within five years of diagnosis.
5YSS = 5-year survival status, BFPS = block filtering post-pruning search, ccRCC = clear cell renal cell carcinoma, CNN = convolutional neural network, LASSO = least absolute shrinkage and selection operator, lmQCM = local maximum quasi-clique merging, ML = machine learning, N.A. = not applicable, SVM = support vector machine.
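The C-index reported above (Harrell's concordance index) is the fraction of comparable patient pairs in which the patient with the shorter survival time received the higher predicted risk; 0.5 corresponds to chance, 1.0 to perfect ranking. A minimal pure-Python implementation of the standard formulation (not the authors' code) is:

```python
def concordance_index(times, events, risk):
    """Harrell's C-index for right-censored survival data.

    times:  observed follow-up times
    events: 1 = death observed, 0 = censored
    risk:   predicted risk scores (higher = worse prognosis)
    """
    concordant, ties, comparable = 0, 0, 0
    n = len(times)
    for i in range(n):
        for j in range(n):
            # a pair is comparable only if patient i's death is observed
            # before patient j's follow-up time
            if events[i] == 1 and times[i] < times[j]:
                comparable += 1
                if risk[i] > risk[j]:
                    concordant += 1
                elif risk[i] == risk[j]:
                    ties += 1  # tied predictions count as half-concordant
    return (concordant + 0.5 * ties) / comparable

# toy example: risk ranking perfectly matches survival order
c = concordance_index([2, 4, 6, 8], [1, 1, 0, 1], [0.9, 0.7, 0.3, 0.1])  # 1.0
```

Production code would use an established implementation (e.g., from the lifelines or scikit-survival packages), which also handle tied event times and large cohorts efficiently.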
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Distante, A.; Marandino, L.; Bertolo, R.; Ingels, A.; Pavan, N.; Pecoraro, A.; Marchioni, M.; Carbonara, U.; Erdem, S.; Amparore, D.; et al. Artificial Intelligence in Renal Cell Carcinoma Histopathology: Current Applications and Future Perspectives. Diagnostics 2023, 13, 2294. https://doi.org/10.3390/diagnostics13132294
