1. Introduction
Prostate cancer (PCa) is the second most frequently diagnosed cancer in men and the fifth leading cause of cancer death in men worldwide [
1]. Early diagnosis of disease is known to improve patient outcomes, and accurate subtyping and staging can inform better treatment plans and improve quality of life [
2]. Research into improving how PCa is diagnosed and staged is therefore highly important for patient survival, helping to ensure that PCa is not underdiagnosed, and molecular research is a key component of this effort. In addition, the rapidly growing application of artificial intelligence (AI) is transforming molecular imaging in PCa.
The scope for AI in the PCa imaging pathway includes supporting the early detection and staging of disease, as well as predicting outcomes based on complex patterns in historical data. AI has the potential to support the oncologist in decision-making by providing additional analytical insight and by flagging features in unusual cases. In summary, AI has the potential to provide decision-support tools and to reduce workload.
The diagnosis of PCa relies on the microscopic examination of prostate tissue obtained through needle biopsy. A primary Gleason grade is assigned to the most prevalent histological pattern observed, while a secondary grade is assigned to the highest-grade pattern, based on the microscopic structure and appearance of the cells. Additionally, biomarker tests beyond serum prostate-specific antigen (PSA) can help estimate the likelihood of prostate cancer in patients who have previously undergone a negative biopsy. One such test is the prostate cancer antigen 3 (PCA3) test, which involves analyzing urine collected after prostatic massage. This test has been validated in this patient population, demonstrating an 88% negative predictive value for subsequent biopsy [
3]. Novel imaging technology has also been integrated into the diagnostic pathways of PCa. The most common approach is Magnetic Resonance Imaging (MRI); however, Computed Tomography (CT) and Positron Emission Tomography (PET) have seen increased use in recent years following the introduction of PET imaging of the molecular biomarker Prostate-Specific Membrane Antigen (PSMA-PET) [
4]. While MRI has been extensively covered in previous review articles [
5,
6], there is a lack of a detailed review of PET/CT in PCa. In addition, the roles of both PET/CT and AI in prostate cancer imaging are rapidly growing. For these reasons, this article will review the combined use of PET/CT and AI.
This narrative review will provide an overview of imaging for PCa, with a focus on PET/CT and PSMA-PET, with comments on other modalities, such as MRI and ultrasound, and on histopathology where appropriate. The role of AI applied to imaging for lesion detection, staging, treatment planning and outcome prediction will be discussed.
This review will start with a summary of the use of PSMA for PET/CT. Then, the main body of the review will focus on the most commonly used AI methodologies applied to PCa imaging, including radiomics, convolutional neural networks (CNNs), unsupervised and semi-supervised learning and generative adversarial networks (GANs), with applications to lesion detection, staging, treatment planning and outcome prediction. This review will highlight the differences between supervised, semi-supervised, weakly supervised and unsupervised learning and their applications to PCa. Each section will begin with an explanation of the technology and how it functions, and will then move on to its applications.
2. PSMA-PET/CT as the Preferred Imaging Modality for Identifying Prostate Cancer Spread
PSMA-PET offers an innovative approach to accurately staging PCa. A multi-center randomized trial comparing conventional imaging to Gallium (Ga)-PSMA-PET showed the superiority of PSMA-PET in accurately identifying pelvic nodes (92% vs. 65% accuracy with conventional imaging). Increasingly, clinicians are using PSMA-PET, especially in patients with high-risk disease [
4].
Another very important application of PSMA-PET is the detection of distant lesions and the clarification of lesions that are indeterminate on conventional imaging in disease relapse. In cases of biochemical failure, PSMA-PET changes the intended treatment and allows for a more personalized approach in nearly 50% of cases, as quantified in a single-arm study of post-prostatectomy patients [
7].
The European Association of Urology (EAU) endorsed the use of PSMA-PET in patients with biochemical relapse following radical treatment; at the same time, there was a consensus against the use of PET in established metastatic PCa [
8].
A health economic analysis of the added value of PSMA-PET to patient care identified a cost saving and also a more effective outcome in terms of life years saved [
9]. Nevertheless, there is scope to improve the clinical application further and adopt AI to enhance the benefits.
3. Radiomics Applications
3.1. Introduction
Radiomics is based on the concept that standard-of-care medical images contain inherent information in the detailed structure of the image, believed to arise from the underlying tissue heterogeneity, including its micro-environment. Radiomics analysis examines the relationships between voxels in the image, applying filters to extract specific feature classes, which are then analyzed numerically to characterize properties such as information content and the repeatability of voxel intensities [
10].
These features are often described as being invisible to the human eye [
11] and hence provide new information to the patient management workflow. The numerical description of the features makes them suitable for statistical and AI analysis. Commonly used radiomics features are texture measures and may be classified by the complexity of the relationships between voxel intensities [
12].
First-order features analyze the histogram of intensities within a region, such as the tumor mass, and include variance, skewness and kurtosis. Second- and higher-order features evaluate the relationships between pairs or groups of voxels. The most commonly used are the following: the grey-level co-occurrence matrix (GLCM; the relationship between pairs of voxels); the grey-level size zone matrix (GLSZM; the number of connected voxels with the same intensity); the grey-level run length matrix (GLRLM; the number of connected voxels with the same intensity in a given direction); and the neighboring grey tone difference matrix (NGTDM; the average difference between the intensity of a voxel and its neighbors). Other features include shape parameters (such as mean 2D and 3D radius) and fractal parameters (such as fractal dimension) [
13]. Radiomics analysis is usually carried out using publicly available toolboxes such as Pyradiomics [
14] and the package of Vallières [
15] implemented in Matlab. There is much interest in bringing the image-based information from radiomics into the multiomics framework with the idea of combining biomolecular-level information with imaging. Zanfardino et al. [
16] present a Multi Assay Experiment (MAE) approach to achieve this, illustrated with a case study based on MRI for breast cancer.
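As an illustration of how these features are computed in practice, the following is a minimal sketch using the Pyradiomics toolbox mentioned above; the file names and the set of enabled feature classes are placeholders rather than a recommended configuration.

```python
# Minimal radiomics extraction sketch using Pyradiomics; file paths are placeholders.
from radiomics import featureextractor

# Enable first-order statistics, the texture matrices described above
# (GLCM, GLSZM, GLRLM, NGTDM) and shape descriptors.
extractor = featureextractor.RadiomicsFeatureExtractor()
extractor.disableAllFeatures()
for feature_class in ["firstorder", "glcm", "glszm", "glrlm", "ngtdm", "shape"]:
    extractor.enableFeatureClassByName(feature_class)

# 'image.nii.gz' is the scan (e.g., PET or CT); 'mask.nii.gz' is the
# delineated region of interest (e.g., the tumor volume).
features = extractor.execute("image.nii.gz", "mask.nii.gz")

for name, value in features.items():
    if not name.startswith("diagnostics"):  # skip metadata entries
        print(name, value)
```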
The clinical application of radiomics involves determining the relationship between the cohort of features extracted from the images and the clinical question of interest. In PCa, the common imaging modalities are MRI (including T1-weighted, T2-weighted and diffusion-weighted), transrectal ultrasound, conventional CT, conebeam CT and molecular imaging, often in the form of PET/CT, with tracers such as radiolabeled Prostate-Specific Membrane Antigen (PSMA) [
17] and fluorine-labeled choline (18F-choline) [
18]. The relationship between radiomics features and clinical endpoints has been analyzed using standard statistical approaches as well as AI approaches.
3.2. Radiomics and AI
AI analysis is often carried out using supervised learning, in which the AI system is given the radiomics data as input and the output is a prediction for the clinical question. The AI system is optimized until it achieves the desired sensitivity and specificity in predicting the clinical endpoint. The AI methods used range from the simplest, such as the linear Support Vector Machine (SVM), to deep learning with multilayer neural networks. Outstanding challenges with radiomics include the following: (1) understanding the relationship between the signals generated from the analysis of images with voxels of mm dimension and the underlying biological properties at the molecular level; (2) the scale of the problem, as studies usually have access to patient datasets in the hundreds while the number of radiomics features generated may be in the thousands, creating an overfitting challenge; and (3), related to (2), whether the radiomics feature set can be reduced a priori to lower the dimensionality of the problem.
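To make the supervised workflow concrete, the sketch below trains a linear SVM on a synthetic radiomics-style feature matrix using scikit-learn; the data are random, and the variance filter and univariate selection stand in for the a priori dimensionality reduction discussed in point (3). It is illustrative only and not taken from any cited study.

```python
# Illustrative supervised-learning pipeline for radiomics features (synthetic data).
import numpy as np
from sklearn.feature_selection import SelectKBest, VarianceThreshold, f_classif
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 1000))   # e.g., 200 patients x 1000 radiomics features
y = rng.integers(0, 2, size=200)   # binary clinical endpoint (synthetic)

model = Pipeline([
    ("variance", VarianceThreshold(threshold=1e-3)),  # drop near-constant features
    ("scale", StandardScaler()),
    ("select", SelectKBest(f_classif, k=20)),         # a priori dimensionality reduction
    ("svm", SVC(kernel="linear")),
])

# Cross-validated AUC guards against the overfitting risk noted above.
auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc")
print(f"mean AUC: {auc.mean():.2f}")
```

We now consider the clinical applications of radiomics in PCa.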
In terms of imaging modalities, there are a number of reviews of radiomics in MR for PCa [
5,
19,
20,
21]. For this reason, this review will focus on radiomics in molecular imaging in PET and in CT.
3.3. Applications in PET
In molecular imaging with PET, studies have evaluated radiomics features of PSMA scans for lesion detection and characterization, assessment of Gleason score (GS), characterization of disease risk and outcome prediction. Moazemi et al. [
22] developed a radiomics model to differentiate pathological and physiological tracer uptake in Ga-labeled PSMA scans. The concept was to develop a tool to aid the radiologist in classifying hotspots as intraprostatic lesions (IPLs) or normal prostatic tissue (NPT). The input data were 2419 hotspots from 72 patients. A total of 40 radiomics parameters were generated from the PET and low-dose CT scans. Erle et al. [
23] further developed this work with data from 87 patients (72 for training and 15 with multiple hotspots for validation). With 77 radiomics features, they achieved a receiver operating characteristic (ROC) area under the curve (AUC) of 0.98 with 0.97 sensitivity and 0.82 specificity. Detection of IPLs was addressed by Zamboglou et al. [
24]. The premise was that IPLs that may be missed by visual inspection might be detected by radiomics. Patient data consisted of a training set of 20 and an external validation set of 52 cases. Histology was used as the gold standard. A total of 154 radiomics features were used. In the training dataset, visual inspection missed lesions in 60% of the patients. Two radiomics features, based on analysis of local binary patterns (LBP), detected visually unknown lesions with an AUC of 0.93. For the validation set, visual inspection missed lesions in 50% of patients, but the LBP radiomics features yielded sensitivity values above 0.80. Domachevsky et al. [
25] also discussed IPL versus NPT hotspot detection. They evaluated the suitability of the PET PSMA SUVmax (maximum standardized uptake value) and the apparent diffusion coefficient (ADC) from diffusion-weighted MRI as imaging biomarkers for distinguishing IPLs from normal tissue. Data from 22 patients yielded 22 IPLs and 22 NPTs. The results showed statistically significant differences between IPLs and NPTs for SUVmax, ADCmin and ADCmean, and the authors concluded that these are suitable for a radiomics model for lesion detection.
Alongi et al. [
26] evaluated the use of radiomics with 18F-choline PET images for disease outcome prediction coupled with sub-group analysis by TNM staging. They analyzed data from 94 high-risk patients consisting of 18F-choline images for restaging and follow-up data. They extracted 51 radiomics features, using LIFEx software (v6.65) [
27] and statistics-based feature reduction. For the whole dataset, two first-order histogram features were able to predict disease progression (67.6% accuracy). For sub-group analysis based on TNM staging, the numbers of features and accuracies were as follows: T: 3 features, 87% accuracy; N: 2 features, 82.6% accuracy; M: 2 features, 72.5% accuracy. Risk stratification using the PSMA tracer DCFPyL was evaluated by Cysouw et al. [
28]. In this prospective study, 76 medium- to high-risk patients who underwent radical prostatectomy and PSMA PET/CT had their primary tumor delineated and 480 radiomics features calculated per tumor. Random forests were trained with the radiomics features to model GS (≥8), lymph node involvement (LNI), metastasis and extracapsular extension (ECE), producing AUC values of 0.81, 0.86, 0.86 and 0.76, respectively. Using standard PET metrics produced lower AUC values of 0.76, 0.77, 0.81 and 0.67.
Yao et al. [
29] evaluated the effect of the outlining threshold, expressed as a percentage of SUVmax, on the prediction of GS, ECE and vascular invasion (VI), using LIFEx [
27] and an SVM, for PSMA scans of 173 patients divided into a training group of 122 and a testing group of 51. Thresholds between 30% and 60% of SUVmax were evaluated. The optimum thresholds were as follows: for GS, 50%, with AUC ≥ 0.80 for both training and test sets; for ECE, 40%, with AUC 0.77; and for VI, 50%, with AUC 0.74. The recommendation was that thresholds of 40–50% of SUVmax are optimal for radiomics modelling of the biological characteristics of PCa. A second study by Zamboglou et al. [
30] also addressed the use of radiomics to predict IPLs, GS and LNI from PSMA PET scans. They trained the model with 20 cases and validated it with a further 40 cases. Strong spatial correlations between histopathology and radiomics (>76%) showed the ability to distinguish between IPLs and NPT. A single texture feature distinguished between GS 7 and ≥ 8 (AUC = 0.91 prospective and 0.84 validation data). The same feature also distinguished between N1 and N0 nodal status (AUC = 0.87 prospective and 0.85 validation). Moazemi et al. [
31] used pretherapeutic Ga-labeled PSMA scans to model the overall survival of patients treated with 177Lu-PSMA. They retrospectively analyzed data from 83 patients with advanced PCa. The parameters used were 73 radiomics and 22 clinical features. The Cox proportional hazards model and LASSO were used to select the most relevant features: SUVmin and the kurtosis of the histogram. Kaplan–Meier analysis was then used to evaluate the radiomics and clinical features for predicting outcome. A radiomics signature based on SUVmin and kurtosis plus several other features yielded a
p-value < 0.05, supporting the hypothesis that radiomics using pre-therapeutic scans may be able to predict overall survival.
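As a sketch of the survival workflow just described, the following uses the lifelines package to fit a penalized Cox proportional hazards model and plot Kaplan–Meier curves; the file name and the column names (suv_min, kurtosis, time, event) are hypothetical, not the study's actual data or code.

```python
# Hypothetical survival-analysis sketch with the 'lifelines' package.
import pandas as pd
from lifelines import CoxPHFitter, KaplanMeierFitter

df = pd.read_csv("radiomics_survival.csv")  # placeholder dataset

# Cox proportional hazards model with a LASSO-type (L1) penalty for
# feature selection over radiomics/clinical features.
cph = CoxPHFitter(penalizer=0.1, l1_ratio=1.0)
cph.fit(df[["suv_min", "kurtosis", "time", "event"]],
        duration_col="time", event_col="event")
cph.print_summary()

# Kaplan-Meier curves stratified by the median of one selected feature.
high = df["suv_min"] > df["suv_min"].median()
for label, group in [("high SUVmin", df[high]), ("low SUVmin", df[~high])]:
    km = KaplanMeierFitter()
    km.fit(group["time"], event_observed=group["event"], label=label)
    km.plot_survival_function()
```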
3.4. Applications in CT
Most uses of CT are not considered molecular imaging, but CT can provide tissue functional information, for instance in contrast-enhanced CT. In addition, radiomics studies with CT often assume that the CT signal carries functional information, for example acting as a surrogate for molecular imaging such as PET. Acar et al. [
32] retrospectively modelled the responses of 75 patients treated for PCa with known bone metastases. The clinical question was whether CT radiomics can differentiate between metastatic lesions with PSMA expression and sclerotic regions that have responded to treatment (and show no PSMA expression). A range of machine learning approaches was used, including k-nearest neighbors (KNN), SVM and decision trees. Radiomics parameters were generated using LIFEx [
27]. AUC values were between 0.63 and 0.76. The conclusion was that radiomics with machine learning can distinguish between metastatic lesions and sclerotic regions on CT scans. Peeken et al. [
33] developed a model for lymph node metastasis detection using CT data. They used data from 80 patients who were treated with radio-guided surgery for resection of PSMA-positive metastases; data from 47 patients were used for training and from 33 for validation. Histological nodal status was used as the reference. A total of 156 radiomics features were extracted. The best radiomics model gave strong predictive performance (AUC = 0.95), significantly better than conventionally used parameters such as lymph node short diameter. Bosetti et al. [
34] studied risk stratification and biochemical relapse using weekly conebeam CT (CBCT) scans of 31 patients treated with radiotherapy. Pyradiomics [
14] was used to extract radiomics features, and fifteen histogram- and shape-based features were selected. Logistic regression was used to predict tumor stage, GS, PSA, risk stratification and biochemical recurrence, with AUC values of 0.78–0.80, 0.80–0.82, 0.83, 0.83 and 1.00, respectively.
Osman et al. [
35] also evaluated CT scans for GS estimation and risk stratification. They used the radiotherapy planning CT scans of 342 patients. The RadiomiX software (v3.0.1) generated 1618 features, which were reduced to 522 after stability analysis. Their results distinguished between GS ≤ 6 and ≥ 7 with AUC = 0.90 and, within GS 7, between 3 + 4 and 4 + 3 with AUC = 0.98. In terms of risk stratification, the distinction between low- and high-risk groups had an AUC = 0.96. Mostafaei et al. [
36] studied the use of CT radiomics coupled with clinical and treatment (dose-volume) parameters to model toxicities in radiotherapy for 64 patients. The toxicities studied were urinary and gastro-intestinal (GI). Three sets of models were developed: radiomics only (R), clinical/dosimetric (CD) and radiomics/clinical/dosimetric (RCD). A total of 31 patients developed grade 1 or above GI toxicity, and 52 developed urinary toxicity. For GI toxicity modelling, AUC values were 0.71, 0.66 and 0.65 for R, CD and RCD, respectively, and for urinary toxicity, 0.71, 0.67 and 0.77, respectively. Tanadini-Lang et al. [
37] demonstrated radiomics in CT perfusion scans for PCa with the specific aim of predicting tumor grade and aggressiveness. Data from 41 patients were analyzed. A total of 1701 radiomics parameters were reduced to 10 using principal component analysis (PCA), and these were then used in multi-variate analysis. Only weak correlation was found between GS and the radiomics parameters. The same parameter, combined with the interquartile range of the mean transit time (MTT) from the perfusion scans, was useful for risk group prediction (AUC = 0.81), and two different radiomics parameters also distinguished risk groups (AUC = 0.77).
3.5. Conclusions
To conclude this discussion, radiomics shows much promise as a component of the AI toolbox in PCa imaging. There are many promising results from early studies, but challenges remain in this relatively new field, including the reproducibility of results between centers and the translation of models between patient groups. There is a need for larger independent datasets with which to test models. The reproducibility of radiomics features and their relationship to the underlying biology need further study [
38], and it is important that methodologies are reported in sufficient detail to enable others to reproduce results [
39].
4. Convolutional Neural Networks
4.1. Introduction
Convolutional neural networks (CNNs) were introduced in 1980 by Fukushima [
40]. The modern CNN consists of distinct layer types, principally the convolutional layer and the pooling layer. In the convolutional layer, a kernel, or filter, moves over the input data, performing elementwise multiplication and summing the results into a single output value. These convolutions create feature maps of the input, which can highlight edges and irregularities within an image. The pooling layer takes the feature map generated by the convolutional layer and either takes the maximum value within the kernel or averages the values within the kernel. This reduces the dimensionality of the image, allowing less important information to be discarded while important features are kept.
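The following minimal PyTorch sketch shows these two layer types in sequence; the architecture, input size and class count are arbitrary choices made only to illustrate the operations described above.

```python
# Minimal CNN sketch: convolutions produce feature maps, max pooling downsamples them.
import torch
import torch.nn as nn

class TinyCNN(nn.Module):
    def __init__(self, n_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1),  # kernel slides over the image
            nn.ReLU(),
            nn.MaxPool2d(2),                             # keep the max of each 2x2 window
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 16 * 16, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x)  # feature maps highlighting edges and irregularities
        return self.classifier(x.flatten(1))

logits = TinyCNN()(torch.randn(4, 1, 64, 64))  # a batch of four 64x64 slices
print(logits.shape)  # torch.Size([4, 2])
```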
4.2. Applications
Classification of malignancy is a common application of AI to PCa. Many studies attempt to automatically determine whether a patient has malignancy, such as the work of Hartenstein et al. [
41], who developed a CNN to determine whether 68Ga-PSMA-PET/CT lymph node status can be predicted from the CT alone. The dataset consisted of 549 patients who received 68Ga-PSMA PET/CT scans. Three separate models were trained: for infiltration status, lymph node location and masked tumor locations. These models performed well, with areas under the curve (AUC) of 0.95, 0.86 and 0.86, respectively, higher than the average uro-radiologist performance of AUC 0.81. Similarly, Di Xu et al. [
42] proposed a 2.5-dimensional metastatic pelvic lymph node detection algorithm using CT only. This model achieved a sensitivity of 83.35% and an AUC of 0.90. Borrelli et al. [
43] proposed a lymph node metastasis detection model for 18F-choline PET/CT in 399 patients. This model outperformed a second human reader, detecting 98 lesions compared to the reader's 87. Ntakolia et al. [
44] proposed a lightweight CNN to classify bone metastases. This model had fewer parameters than many other bone metastasis models, reducing its complexity. The dataset consisted of 817 PCa patients with scintigraphy scans. The model achieved a sensitivity of 97.8% and a specificity of 98.4%, with an overall classification accuracy of 97.41%.
Capobianco et al. [
45] proposed a dual-tracer learning model on 173 patients with 68Ga-PSMA-11 PET/CT to classify full-body uptake in PCa patients. This was carried out by passing the patient data through a CNN to classify sites of elevated tracer uptake as either suspicious or non-suspicious, with the results then assigned to anatomical regions. Of the 5577 annotated high-uptake regions, 1057 were suspicious. The model provided an average accuracy of 78.4% for suspicious regions and 94.1% for all uptake. Tragardh et al. [
46] proposed a primary tumor and metastatic disease analysis model for whole-body 18F-PSMA-1007 PET/CT. A total of 660 patients with whole-body PET/CT scans were analyzed. The model provided sensitivities of 79% for detecting lesions, 79% for lymph node metastases and 62% for bone metastases, compared to nuclear medicine physician sensitivities of 78%, 78% and 59%, respectively.
Jong Jin Lee et al. [
47] proposed a recurrence detection algorithm with 18F-fluciclovine PET. Three models were trained: one with a single-slice approach, one with a 2D case-based approach and one with a full 3D approach. The dataset consisted of 251 patients with PET scans labeled as normal, abnormal or indeterminable. The 2D CNN slice-based approach had a sensitivity of 90.7% and a specificity of 95.1%, with an AUC of 0.97. The 2D case-based approach achieved a sensitivity of 85.7% and a specificity of 71.4%, with an AUC of 0.75. The 3D case-based approach achieved a sensitivity of 71.4%, a specificity of 71.4% and an AUC of 0.70.
These studies show that CNNs can perform comparably to, and sometimes better than, professional radiologists, demonstrating the predictive power CNNs can offer for detecting and classifying a variety of problems in PCa, from tumor classification to metastatic location classification. While these models perform well, larger datasets from different centers are needed to verify them.
Compared to classification, segmentation classifies individual pixels based on a given mask: pixels inside the mask are classified as positive, while those outside are classified as negative. In many PCa problems, there are many more negative pixels than positive, which makes the task difficult. Kostyszyn et al. [
48] proposed a CNN to segment the Gross Tumor Volume (GTV) in primary PCa patients. The dataset had 209 patients from three separate centers who had PSMA-PET. The model utilized the 3D U-Net architecture to segment the GTV volume. The median Dice similarity scores for different center cohorts were 0.84, 0.81 and 0.83. Sensitivity and specificity were 98% and 76% for the first cohort and 100% and 57% for the second cohort. Wang et al. [
49] proposed a dual attention mask R-CNN on PET/CT to segment the prostate and the dominant intraprostatic lesions. The dataset consisted of 25 patients with PET/CT scans. The first network of the pair located a rough initial ROI in the patient, while the second performed the segmentation. The model had a Dice similarity score of 0.84 ± 0.09 (±1 standard deviation).
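Since the studies in this section report performance as Dice similarity scores, the short sketch below shows how the metric is computed for binary masks, together with a differentiable "soft" version that is often used as a training loss to cope with the pixel imbalance noted above; it is a generic formulation, not any study's implementation.

```python
# Dice similarity coefficient for binary masks, plus a soft Dice training loss.
import torch

def dice_coefficient(pred: torch.Tensor, target: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    """2*|A intersect B| / (|A| + |B|), with eps for numerical stability."""
    intersection = (pred * target).sum()
    return (2 * intersection + eps) / (pred.sum() + target.sum() + eps)

def soft_dice_loss(probs: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
    """1 - Dice on predicted probabilities; differentiable and robust to imbalance."""
    return 1 - dice_coefficient(probs, target)

pred = torch.randint(0, 2, (1, 64, 64)).float()    # example predicted mask
target = torch.randint(0, 2, (1, 64, 64)).float()  # example ground-truth mask
print(dice_coefficient(pred, target).item())
```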
Matkovic et al. [
50] similarly proposed a cascaded regional-net for segmenting the prostate and dominant intraprostatic lesions. This model used 49 patients and achieved Dice similarity scores of 0.93 ± 0.06 for the prostate and 0.80 ± 0.18 for dominant intraprostatic lesions. Rainio et al. [
51] proposed a method of choosing separate thresholds for each PET image slice with a CNN to label pixels directly on the slice; this was combined with a CNN which used constant thresholding to pick the optimal thresholds. The dataset consisted of 78 PCa patients. The average Dice similarity scores were 0.72 ± 0.41 for the variable thresholding and 0.69 ± 0.38 for the mixed method. Holzschuh et al. [
52] proposed a CNN trained on 18F-PSMA-1007 PET to segment intraprostatic GTVs in primary PCa. The model was trained on 128 patients, tested on an independent internal cohort of 52 patients and externally validated on three datasets of 14, 9 and 10 patients with different radiotracers. The median Dice scores were 0.82 for the internal set, 0.71 for an external set with the same tracer, 0.80 for the external set with 18F-DCFPyL-PSMA and 0.80 for the external set with 68Ga-PSMA-11.
Zhao et al. [
53] proposed a classification and segmentation model for 68Ga-PSMA-11 PET/CT images. The cohort contained 193 patients, on which a triple-combining 2.5D U-Net was trained. The network performed well, with precision, recall and F1 scores of 0.99 for bone lesion detection and of 0.94, 0.89 and 0.92 for lymph node detection. Segmentation achieved average Dice scores of 0.65 and 0.55 for bone and lymph node lesions, respectively. Ghezzo et al. [
54] externally validated a CNN trained to segment prostate GTVs. This model was validated on 68Ga-PSMA-PET images, with 85 patients included in the dataset. The model achieved a median Dice score of 0.74 and was robust across modalities and ground truth labels. These segmentation models show that it is entirely feasible to use CNNs to segment sites of risk in PCa. As mentioned in the classification discussion, there is a need for larger-scale datasets and models to be trained, and the black-box nature of CNNs does not lend itself well to explaining how the models reach their conclusions.
5. Unsupervised Learning
Most models in clinical applications tend to be supervised, as they perform a diagnostic or segmentation task and are trained using ground truth information. Unsupervised methods potentially have greater clinical utility, as they can use the large volumes of available data without the need for time-consuming and costly-to-acquire expert labeling. Some unsupervised models have been developed or utilized for traditionally supervised tasks; however, they still require labeled data for validation of model outputs, as is the case for supervised models.
Unsupervised learning is a crucial tool in AI for pattern recognition, and this can provide powerful tools for knowledge discovery in medical data. In the medical context, these methods would primarily be used to cluster or subtype patients, learn latent features in the inputs that may be used for downstream supervised tasks [
55,
56,
57,
58,
59], such as feature selection, computer-aided detection [
60] or for visualization of data and machine learning (ML) results [
61]. One core methodology in unsupervised learning is the use of clustering methods to obtain group membership of data based on selected input features [
55]. The use of clustering in medical imaging is extensive, with applications in areas such as segmentation in CT colonography [
62] and 3D segmentation [
Dimensionality reduction methods have also been used in AI workflows, such as in the visualization of labeled data and of model outputs such as risk scores [
63].
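As a generic illustration of these ideas, the sketch below clusters synthetic image-derived feature vectors with k-means and uses PCA to obtain a two-dimensional view of the clusters; the data and the choice of three clusters are arbitrary.

```python
# Illustrative unsupervised pipeline: k-means clustering plus PCA visualization.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
features = rng.normal(size=(300, 50))  # e.g., 300 patients x 50 image-derived features

labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(features)
embedding = PCA(n_components=2).fit_transform(features)  # 2D view for plotting

for k in range(3):
    members = labels == k
    print(f"cluster {k}: {members.sum()} patients, "
          f"2D centroid {embedding[members].mean(axis=0).round(2)}")
```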
An emerging trend is the use of generative AI in medical imaging for tasks such as modality transfer or to increase the number and variation of images in the dataset [
64]. Here, one typically has pairs of images, where the matching between images constitutes a label. Modality transfer is a growing area of interest, but its clinical utility is limited by barriers around the validation and verification of synthesized images. Instead, most uses of generative models are for increasing the data available for training models or for training adversarial models such as GANs [
64,
65].
The emergence of transfer learning has enabled applications with small amounts of data to access deep learning models trained on very large datasets from a different domain. Typically, this is used in supervised tasks, but it also enables the use of the pre-trained network as a feature extractor for clustering or classification tasks. The performance of transfer learning models demonstrates their potential impact in clinical tasks [
66,
67,
68], such as lesion detection [
69] and combining patient imaging and demographics data [
70]. Pretrained models transferred from one task to another have been shown to perform at least as well as models trained (from random network weights) for the specific task, and fine-tuning these models improves robustness [
69].
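A common pattern from this paragraph, sketched below under the assumption of an ImageNet-pretrained torchvision ResNet, is to freeze the pretrained network and reuse its penultimate-layer activations as features for a downstream clustering or classification task.

```python
# Transfer-learning sketch: a frozen ImageNet-pretrained ResNet as a feature extractor.
import torch
from torchvision import models

backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()   # drop the ImageNet classification head
backbone.eval()
for p in backbone.parameters():
    p.requires_grad = False         # frozen: no fine-tuning in this sketch

with torch.no_grad():
    images = torch.randn(8, 3, 224, 224)  # a batch of pre-processed slices
    embeddings = backbone(images)         # 8 x 512 feature vectors
print(embeddings.shape)
```

Fine-tuning would instead leave some or all parameters trainable, which, as noted above, can improve robustness.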
The so-called next generation of AI is the use of foundation models, where a model is trained on many diverse datasets in order to perform multiple tasks [
71]. These can help generalize models and reduce the need to have large amounts of labeled data in a specific domain [
72,
73], and these have been used for analysis of patient records [
74,
75] and classification of pathologies [
76], with potential for patient support systems [
77,
78].
6. Semi-Supervised Learning
6.1. Introduction
In the domain of deep learning, limited labeled datasets can lead to overfitting, emphasizing the need for high-quality patient data to minimize bias in clinical practice. Nevertheless, addressing this challenge is complicated by the privacy and ethical concerns surrounding medical data. Moreover, the scarcity of labeled data for training deep learning algorithms makes manual labeling both expensive and reliant on physician expertise. A potential solution to this issue is semi-supervised learning (SSL), which improves model performance by integrating a restricted set of labeled data with a more extensive pool of unlabeled data.
Present SSL solutions are structured around three fundamental assumptions [
79]: (1) smoothness, positing that similar images share similar labels; (2) low-density, asserting that decision boundaries avoid high-density regions in the feature space; and (3) manifold, contending that samples on the same low-dimensional manifold within the feature space bear the same label. Exploring these assumptions, SSL methods can broadly be categorized into pseudo-label-based SSL [
80,
81,
82] and consistency-based SSL [
83,
84,
85].
In pseudo-labeling SSL [
80,
81,
82], the model’s predictions on unlabeled data serve as additional virtual labels, providing a form of supervision for the model to learn from the unlabeled samples. This helps in leveraging the information present in the unlabeled data to improve the model’s performance. However, pseudo-labeling SSL assumes that the model’s predictions on unlabeled data are reasonably accurate, which means that if the model’s predictions on unlabeled data are noisy or unreliable, it may negatively impact the overall performance. Therefore, careful consideration and validation of the pseudo-labeling process are crucial for successful implementation. On the other hand, consistency learning [
83,
84,
85] exposes the model to perturbed versions of the same input and encourages it to provide consistent predictions, so it can learn more robust and generalizable representations. This approach leverages the unlabeled data to encourage the model to capture the underlying structure of the data, which can lead to improved performance, especially when labeled data are scarce. Consistency learning typically exhibits higher accuracy than pseudo-labeling. This discrepancy can be attributed to the fact that pseudo-label methods overlook a portion of the unlabeled training dataset during the training process, consequently diminishing their generalization capabilities.
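The following is a schematic sketch of a confidence-thresholded pseudo-labeling step of the kind described above; the threshold and the masking scheme are illustrative choices, not those of any specific cited method.

```python
# Schematic pseudo-labeling loss: confident predictions on unlabeled data become targets.
import torch
import torch.nn.functional as F

def pseudo_label_loss(model, x_unlabeled, threshold: float = 0.95):
    # Generate pseudo labels without tracking gradients.
    with torch.no_grad():
        probs = F.softmax(model(x_unlabeled), dim=1)
        confidence, pseudo_targets = probs.max(dim=1)
        mask = confidence >= threshold        # keep only reliable predictions
    # Train the model against its own confident predictions.
    logits = model(x_unlabeled)
    loss = F.cross_entropy(logits, pseudo_targets, reduction="none")
    return (loss * mask.float()).mean()       # unconfident samples contribute 0
```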
Given the superior performance of consistency learning, we now focus on the main approaches proposed in the field. The Π model [
83] introduces a weak-strong augmentation scheme, generating pseudo labels from weakly augmented data while making predictions on its strongly augmented version, incorporating noise perturbation or color-jittering. To enhance the stability of these pseudo labels, the Π model variant Temporal Ensembling [
83], employs an exponential moving average (EMA) method, accumulating historical results. Despite its stability improvement, Temporal Ensembling incurs high hardware costs associated with storing historical results.
Addressing this issue, Mean Teacher (MT) [
85], a widely adopted semi-supervised structure, overcomes this challenge by maintaining a “teacher” network whose parameters are an EMA of the student network's parameters and using the teacher to produce pseudo labels. However, a notable drawback of consistency-learning methods, including Mean Teacher, is their tendency to converge to the same local minimum during training [
81], resulting in both teacher and student models exhibiting similar behavior for many complex data patterns.
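A sketch of the core Mean Teacher mechanics is given below: the teacher's weights track an EMA of the student's, and a consistency loss compares their predictions on differently perturbed inputs. The Gaussian noise perturbation and EMA rate are illustrative choices.

```python
# Mean Teacher sketch: EMA weight update plus a consistency loss.
import copy
import torch
import torch.nn.functional as F

@torch.no_grad()
def ema_update(teacher: torch.nn.Module, student: torch.nn.Module, alpha: float = 0.99):
    # teacher = alpha * teacher + (1 - alpha) * student, parameter by parameter.
    for t_param, s_param in zip(teacher.parameters(), student.parameters()):
        t_param.mul_(alpha).add_(s_param, alpha=1 - alpha)

def consistency_loss(student, teacher, x):
    # Two independently perturbed views of the same unlabeled input.
    view_a = x + 0.1 * torch.randn_like(x)
    view_b = x + 0.1 * torch.randn_like(x)
    with torch.no_grad():
        teacher_probs = F.softmax(teacher(view_a), dim=1)
    student_probs = F.softmax(student(view_b), dim=1)
    return F.mse_loss(student_probs, teacher_probs)

student = torch.nn.Linear(10, 2)
teacher = copy.deepcopy(student)  # the teacher starts as a copy, then tracks the EMA
```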
6.2. Applications
In medical image segmentation, there has been a notable surge in interest in semi-supervised learning, particularly with a focus on segmenting uncertain regions within pseudo labels. Recent work [
86,
87,
88] introduces uncertainty-aware mechanisms that utilize Monte Carlo dropout to estimate uncertainty regions associated with pseudo labels. Others, such as [
89,
90], progressively learn unsupervised samples by considering prediction confidence or the dissimilarity between predictions from different scales. An alternative approach by [
91] involves estimating uncertainty through a threshold-based entropy map, while [
92,
93,
94] gauge uncertainty by calibrating predictions from multiple networks.
While these methodologies aim to avoid potential noise from unlabeled data, they unintentionally sideline the learning of potentially accurate pseudo labels, leading to insufficient convergence, especially in the face of complex input data patterns. Addressing this concern, ref. [
95] proposes a complementary learning approach, leveraging negative entropy based on inverse label information. Refs. [
96,
97,
98] successfully explore both negative and positive learning techniques to strike a balance between steering clear of learning from potentially noisy regions and mitigating insufficient convergence.
In pursuit of good generalization, certain studies [
99,
100,
101] employ adversarial learning, while others [
102,
103,
104,
105] leverage perturbations across multiple networks to enhance consistency. Despite their success in recognizing unlabeled patterns, these works have overlooked the importance of in-context perturbations in input data, crucial for prompting the model to discern spatial patterns. Addressing this gap, ref. [
106] has introduced a contrastive learning framework aimed at capturing such spatial information in the segmentation of urban driving scenes. However, their contrastive learning framework relies on potentially erroneous segmentation predictions, introducing a risk of confirmation bias in the pixel-wise positive/negative sampling strategy, leading to suboptimal accuracy in handling complex medical images. Additionally, their framework does not explore network perturbations, thereby limiting its generalization capacity.
In the specific area of single-photon emission computerized tomography (SPECT), Apiparakoon et al. [
107] applied a semi-supervised learning (SSL) technique known as the Ladder Feature Pyramid Network (LFPN). The LFPN is distinctive for incorporating an autoencoder structure within the ladder network, enabling self-training with unlabeled data. While the LFPN, when used in isolation, achieves a slightly lower F1-score than self-training methods, it is significantly more efficient, demanding only half the training time.
Moreover, in addressing the challenge of limited labeled data, alternative strategies have been proposed. Some studies advocate for pretraining the model by leveraging unlabeled data obtained from related datasets, as suggested in prior research [
108,
109]. This pretraining approach seeks to enhance the model’s performance by leveraging additional information from related datasets, thereby compensating for the scarcity of labeled data in the specific domain of interest, such as SPECT imaging.
7. Generative Neural Networks
7.1. Introduction
Generative adversarial networks (GANs) are a type of machine learning architecture consisting of two models which play a game [110] (Figure 1). One network is the generator, which draws samples x from a model distribution pmodel(x). The generator is defined by a prior distribution p(z) over a vector z, which is the input to the generator function G(z; θ(G)), where θ(G) is a set of learnable parameters defining the generator's strategy for the game. The input vector z is a source of randomness, similar to a seed in a pseudorandom number generator, and the prior distribution p(z) is usually unstructured. The goal of the generator is therefore to learn a function G(z) that transforms the unstructured noise z into realistic samples. The other network is the discriminator, which examines samples x and returns an estimate D(x; θ(D)) of whether x was drawn from the real dataset or from the pmodel distribution produced by the generator. Each model has a cost function: JG(θ(G), θ(D)) for the generator and JD(θ(G), θ(D)) for the discriminator. The discriminator's cost encourages it to classify real data correctly, and the generator's cost encourages it to generate samples that the discriminator will incorrectly classify.
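To make the game concrete, the following PyTorch sketch shows a single training step using the common non-saturating binary cross-entropy losses; the network shapes and data are placeholders, and the sketch is not tied to any cited architecture.

```python
# One GAN training step: D learns to separate real from generated, G learns to fool D.
import torch
import torch.nn.functional as F

G = torch.nn.Sequential(torch.nn.Linear(16, 64), torch.nn.ReLU(), torch.nn.Linear(64, 32))
D = torch.nn.Sequential(torch.nn.Linear(32, 64), torch.nn.ReLU(), torch.nn.Linear(64, 1))
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)

real = torch.randn(8, 32)   # stand-in for a batch of real samples x
z = torch.randn(8, 16)      # draws from the unstructured prior p(z)

# Discriminator step: push D(real) toward 1 and D(G(z)) toward 0.
d_loss = (F.binary_cross_entropy_with_logits(D(real), torch.ones(8, 1))
          + F.binary_cross_entropy_with_logits(D(G(z).detach()), torch.zeros(8, 1)))
opt_d.zero_grad()
d_loss.backward()
opt_d.step()

# Generator step: make D classify generated samples as real.
g_loss = F.binary_cross_entropy_with_logits(D(G(z)), torch.ones(8, 1))
opt_g.zero_grad()
g_loss.backward()
opt_g.step()
```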
7.2. Applications
The clinical applications of generative models are wide, as shown in
Table 1, including dosimetry prediction, tumor segmentation and dose plan translation.
Xue et al. [
111] proposed a 3D GAN model to predict post-therapy dosimetry in patients with metastatic castration-resistant PCa (mCRPC). The dataset consisted of 30 mCRPC patients and 48 treatment cycles with 68Ga-PSMA-11 PET/CT. The model utilized a 3D U-Net as the generator and a standard CNN as the discriminator; a further dual-input model was created which incorporated both the CT and PET information. The model also used a voxel-wise loss alongside an image-wise loss function for better synthesis. The single-input model achieved a voxel-wise mean absolute percentage error of 17.57% ± 5.42% (one standard deviation), while the dual-input model achieved 18.94% ± 5.65%.
Murakami et al. [
112] proposed an architecture known as a Pix2Pix GAN to translate CT images and structure sets into a fully automated dose plan. The motivation was to reduce the need to annotate patient structures, which previous dose prediction models required. The dataset consisted of 90 IMRT (intensity-modulated radiotherapy) plans of PCa patients. Each patient was prescribed 78 Gy in 39 fractions to a planning target volume (PTV) that could be covered by 95% of the prescribed dose; the PTV was generated by adding 5 mm margins to the clinical target volume (CTV). The RT-Dose array was converted to 512 × 512 16-bit images to match the CT image size. The Pix2Pix model generates synthetic images from a source image to a target image, in this case from CT and structure sets to RT-dose images. For the CT-based dose prediction, the final dose differences were within approximately 2% for the PTV (except D98% and D95%) and approximately 3% for organs at risk; the structure-based approach was within 1% for the PTV and 2% for the organs at risk. The mean prediction time of the model was approximately 5 s.
Sultana et al. [
113] proposed a 3D U-net generator and fully connected network discriminator architecture to segment regions of interest in PCa patients. The dataset consisted of 115 PCa patients with CT scans of the pelvic region. Manual contours of the prostate, bladder and rectum were drawn by a radiation oncologist. The CT scans were down-sampled from 512 × 512 × [80–120 slices] to 118 × 188 × [48–72 slices], on which the coarse segmentation model was trained. A fine segmentation model was then trained on the ROIs generated by the coarse segmentation model. This model had state-of-the-art performance, with Dice similarity coefficients (DSC) of 0.90 ± 0.05 for the prostate, 0.96 ± 0.06 for the bladder and 0.91 ± 0.09 for the rectum. Zhang et al. [
114] proposed ARPM-Net for prostate and organ-at-risk segmentation in pelvic CT images. The dataset consisted of 120 patients with CT scans and structure sets. The segmentation network followed a U-net structure with added Markov random field blocks and local convolutions, which reduced the number of model parameters by half. The results achieved higher DSC scores than the baselines generated in the study: 0.88 ± 0.09 for the prostate, 0.97 ± 0.08 for the bladder, 0.86 ± 0.08 for the rectum and 0.97 ± 0.01 for both the left and right femurs.
Heilemann et al. [
115] proposed a U-Net CycleGAN to segment various tumors, including prostate. The prostate dataset consisted of 308 patients; models were created from various patient cohorts, with the best models tested on 179 patients. Structure sets and organs at risk were delineated by an experienced radiation oncologist. The U-net and GAN combination provided good DSC scores: 0.84 ± 0.08 for the rectum, 0.89 ± 0.08 for the bladder and 0.93 ± 0.06 for both the left and right femurs.
Chan et al. [
116] proposed a variety of Cycle-GAN architectures to remove under-sampling artifacts and correct the image intensities of 25% dose CBCT images, creating a synthetic planning CT. The dataset consisted of 41 PCa patients who received volumetric modulated arc therapy (VMAT). Unpaired 4-fold validation was used, with the median of the four models taken as the model output. For the anatomical fidelity of the bladder, the DSC achieved an average test score of 0.88 with the standard Cycle-GAN and 0.92 with a residual GAN; the rectum achieved 0.77 with the standard GAN and 0.87 with the residual GAN.
Pan et al. [
117] proposed a diffusion probabilistic model to automatically synthesize a variety of medical images. Diffusion models are an alternative approach to image generation: noise is gradually added to the dataset, and a model is trained to denoise the images. Because the model learns to denoise images starting from random noise, it effectively learns to generate samples that are purely synthetic and not part of the original cohort. The prostate segment of their study consisted of scans of 93 patients, totaling 4157 2D CT slices. These were passed through the diffusion model and subsequently classified into pelvic or abdominal CT, with an accuracy of 89% and a sensitivity of 82% for the fully synthesized dataset, and 93% accuracy and 90% sensitivity for a mix of synthetic and real data.
8. Discussion
The role of bio-imaging, both functional and anatomical, is essential to the successful management of PCa. In terms of imaging with molecular-based signatures, MR plays a pivotal role. There is a range of review articles that discuss the role of MR and the application of ML and AI to MR [
6,
21,
118,
119,
120], and the reader is referred to these for more information on this area. PET/CT molecular imaging is also pivotal and is playing a growing role, particularly with the advent of PSMA and the availability of new radioisotopes and radiopharmaceuticals. For these reasons, this review has focused on PET/CT imaging of PSMA.
Outstanding challenges for AI in biomolecular imaging for PCa include the role of the clinician. It is the view of the authors that AI will be used as a tool to support the clinical decision process, particularly for borderline cases, and the role of AI decision systems will evolve as the systems become more accurate. Dataset sizes in cancer imaging are small in AI terms: the original algorithms are often demonstrated on datasets of hundreds of thousands of images, whereas cancer imaging datasets contain smaller numbers, often one to several hundred. This can be ameliorated by the choice of training strategy, as discussed above. A second feature of cancer imaging datasets, particularly in screening and disease detection, is that the data may be imbalanced, for example with a predominance of disease-free scans; approaches to ameliorate this problem are also discussed above. Other challenges include bias in the dataset: if the AI model is trained on data from one population or gender, it may be less suitable for another population. Explainability is a challenge in many AI applications but is especially important in healthcare. If an AI system predicts a poor outcome, for instance, what are the features in the data that prompt this prediction? Answering this is important not only for quality assurance of the system's results but may also have an impact in changing a poor prognosis to a good one.
The shape of AI in ‘real-world’ clinical practice is a contentious point. In one survey, 276 radiologists from different countries were asked about their experience of integrating AI tools: only 17.8% experienced difficulties, but in only 23.4% of cases was a workload-reducing benefit reported, and only 13.3% of radiologists would like to invest in AI tools. Hence, we are seeing some skepticism amongst stakeholders [
121]. Other challenges to the use of AI in this area include the requirement for a regulatory framework that encompasses the use of AI, how the role of AI is incorporated into the patient management workflow, considerations of commercialization, ownership of data and how the development of the AI system is managed. For example, Zhang et al. [
122] identified five subject areas that influence the trustworthiness of medical AI. Consideration of these five areas will allow for easier integration of AI into the clinic.
A range of groups are making available cancer-imaging datasets to aid development of machine learning and AI systems in cancer [
123]. This effort is in its infancy, but these datasets will provide a gold standard with which to compare different algorithms, test their relative strengths and weaknesses and support the future development of new AI methods. Furthermore, given the need for large datasets for AI systems and for sufficient heterogeneity in the data to avoid bias and the misdiagnosis of smaller class groups, there is a strong need for multi-center collaboration to develop large datasets and to enable data-sharing, which will in turn enable the development of future AI systems.
9. Conclusions
This review has discussed the role of ML and AI in molecular imaging for PCa, with an emphasis on PSMA-based PET/CT. This is a rapidly evolving field, with many of the clinical application papers cited here published in the past few years and AI approaches developing rapidly.
The current status is that a range of studies demonstrate the use of AI in PET/CT for PCa across a range of clinical tasks, such as lesion detection, outlining and outcome prediction. In many cases, these produce results as good as or better than those of human readers, but the studies are often small in scale and based on a single patient population. There is a need to upscale such studies to larger, more heterogeneous populations. The growing availability of new AI methodologies tailored to smaller datasets and to unlabeled and semi-labeled data shows great promise for impact in this area. In conclusion, current studies provide strong evidence for the role of AI in this area. The next challenge is to make these approaches mainstream, addressing challenges such as integration into the clinic, dataset size and the choice of optimal AI algorithm.
Author Contributions
Conceptualization, W.T., C.M., P.M.E. and S.B.; methodology, W.T.; writing—original draft preparation, W.T., G.C., C.M., S.A.T., P.M.E. and S.B.; writing—review and editing, W.T., G.C., C.M., S.A.T., P.M.E. and S.B. All authors have read and agreed to the published version of the manuscript.
Funding
This research received no external funding.
Institutional Review Board Statement
Not applicable.
Informed Consent Statement
Not applicable.
Data Availability Statement
No new data were created or analyzed in this study. Data sharing is not applicable to this article.
Acknowledgments
W.T. acknowledges PhD funding from an iCASE award by the Engineering and Physical Sciences Research Council and The National Physical Laboratory through the National Measurement System.
Conflicts of Interest
The authors declare no conflicts of interest.
References
- Berenguer, C.V.; Pereira, F.; Câmara, J.S.; Pereira, J.A. Underlying Features of Prostate Cancer—Statistics, Risk Factors, and Emerging Methods for Its Diagnosis. Curr. Oncol. 2023, 30, 2300–2321. [Google Scholar] [CrossRef]
- Why Is Early Cancer Diagnosis Important? Available online: https://www.cancerresearchuk.org/about-cancer/cancer-symptoms/why-is-early-diagnosis-important (accessed on 27 November 2023).
- Ioannidou, E.; Moschetta, M.; Shah, S.; Parker, J.S.; Ozturk, M.A.; Pappas-Gogos, G.; Sheriff, M.; Rassy, E.; Boussios, S. Angiogenesis and Anti-Angiogenic Treatment in Prostate Cancer: Mechanisms of Action and Molecular Targets. Int. J. Mol. Sci. 2021, 22, 9926. [Google Scholar] [CrossRef]
- Hofman, M.S.; Lawrentschuk, N.; Francis, R.J.; Tang, C.; Vela, I.; Thomas, P.; Rutherford, N.; Martin, J.M.; Frydenberg, M.; Shakher, R.; et al. proPSMA Study Group Collaborators. Prostate-specific membrane antigen PET-CT in patients with high-risk prostate cancer before curative-intent surgery or radiotherapy (proPSMA): A prospective, randomised, multicentre study. Lancet 2020, 395, 1208–1216. [Google Scholar] [CrossRef]
- Sun, Y.; Reynolds, H.M.; Parameswaran, B.; Wraith, D.; Finnegan, M.E.; Williams, S.; Haworth, A. Multiparametric MRI and radiomics in prostate cancer: A review. Australas. Phys. Eng. Sci. Med. 2019, 42, 3–25. [Google Scholar] [CrossRef]
- Bhattacharya, I.; Khandwala, Y.S.; Vesal, S.; Shao, W.; Yang, Q.; Soerensen, S.J.C.; Fan, R.E.; Ghanouni, P.; Kunder, C.A.; Brooks, J.D.; et al. A review of artificial intelligence in prostate cancer detection on Imaging. Ther. Adv. Urol. 2022, 14, 17562872221128791. [Google Scholar] [CrossRef]
- Bianchi, L.; Schiavina, R.; Borghesi, M.; Ceci, F.; Angiolini, A.; Chessa, F.; Droghetti, M.; Bertaccini, A.; Manferrari, F.; Marcelli, E.; et al. How does 68Ga-prostate-specific membrane antigen positron emission tomography/computed tomography impact the management of patients with prostate cancer recurrence after surgery? Int. J. Urol. 2019, 26, 804–811. [Google Scholar] [CrossRef]
- Fanti, S.; Briganti, A.; Emmett, L.; Fizazi, K.; Gillessen, S.; Goffin, K.; Hadaschik, B.A.; Herrmann, K.; Kunikowska, J.; Maurer, T.; et al. EAU-EANM consensus statements on the role of prostate-specific membrane antigen positron emission tomography/computed tomography in patients with prostate cancer and with respect to [177Lu] Lu-PSMA radioligand therapy. Eur. Urol. Oncol. 2022, 5, 530–536. [Google Scholar] [CrossRef] [PubMed]
- Gordon, L.G.; Elliott, T.M.; Joshi, A.; Williams, E.D.; Vela, I. Exploratory cost-effectiveness analysis of 68 Gallium-PSMA PET/MRI-based imaging in patients with biochemical recurrence of prostate cancer. Clin. Exp. Metastasis 2020, 37, 305–312. [Google Scholar] [CrossRef] [PubMed]
- Aerts, H.J.; Velazquez, E.R.; Leijenaar, R.T.; Parmar, C.; Grossmann, P.; Carvalho, S.; Bussink, J.; Monshouwer, R.; Haibe-Kains, B.; Rietveld, D.; et al. Decoding tumour phenotype by noninvasive imaging using a quantitative radiomics approach. Nat. Commun. 2014, 5, 4006. [Google Scholar] [CrossRef] [PubMed]
- Cook, G.J.; Azad, G.; Owczarczyk, K.; Siddique, M.; Goh, V. Challenges and promises of PET radiomics. Int. J. Radiat. Oncol. Biol. Phys. 2018, 102, 1083–1089. [Google Scholar] [CrossRef] [PubMed]
- Alobaidli, S.; McQuaid, S.; South, C.; Prakash, V.; Evans, P.; Nisbet, A. The role of texture analysis in imaging as an outcome predictor and potential tool in radiotherapy treatment planning. Br. J. Radiol. 2014, 87, 20140369. [Google Scholar] [CrossRef]
- Cusumano, D.; Dinapoli, N.; Boldrini, L.; Chiloiro, G.; Gatta, R.; Masciocchi, C.; Lenkowicz, J.; Casà, C.; Damiani, A.; Azario, L.; et al. Fractal-based radiomic approach to predict complete pathological response after chemo-radiotherapy in rectal cancer. Radiol. Med. 2018, 123, 286–295.
- van Griethuysen, J.J.M.; Fedorov, A.; Parmar, C.; Hosny, A.; Aucoin, N.; Narayan, V.; Beets-Tan, R.G.H.; Fillion-Robin, J.C.; Pieper, S.; Aerts, H.J.W.L. Computational radiomics system to decode the radiographic phenotype. Cancer Res. 2017, 77, e104–e107.
- Vallières, M.; Kay-Rivest, E.; Perrin, L.J.; Liem, X.; Furstoss, C.; Aerts, H.J.W.L.; Khaouam, N.; Nguyen-Tan, P.F.; Wang, C.S.; Sultanem, K.; et al. Radiomics strategies for risk assessment of tumour failure in head-and-neck cancer. Sci. Rep. 2017, 7, 10117.
- Zanfardino, M.; Franzese, M.; Pane, K.; Cavaliere, C.; Monti, S.; Esposito, G.; Salvatore, M.; Aiello, M. Bringing radiomics into a multi-omics framework for a comprehensive genotype–phenotype characterization of oncological diseases. J. Transl. Med. 2019, 17, 337.
- Maurer, T.; Eiber, M.; Schwaiger, M.; Gschwend, J.E. Current use of PSMA–PET in prostate cancer management. Nat. Rev. Urol. 2016, 13, 226–235.
- Husarik, D.B.; Miralbell, R.; Dubs, M.; John, H.; Giger, O.T.; Gelet, A.; Cservenyàk, T.; Hany, T.F. Evaluation of [18F]-choline PET/CT for staging and restaging of prostate cancer. Eur. J. Nucl. Med. Mol. Imaging 2008, 35, 253–263.
- Delgadillo, R.; Ford, J.C.; Abramowitz, M.C.; Dal Pra, A.; Pollack, A.; Stoyanova, R. The role of radiomics in prostate cancer radiotherapy. Strahlenther. Onkol. 2020, 196, 900–912.
- Ferro, M.; de Cobelli, O.; Vartolomei, M.D.; Lucarelli, G.; Crocetto, F.; Barone, B.; Sciarra, A.; Del Giudice, F.; Muto, M.; Maggi, M.; et al. Prostate Cancer Radiogenomics-From Imaging to Molecular Characterization. Int. J. Mol. Sci. 2021, 22, 9971.
- Penzkofer, T.; Padhani, A.R.; Turkbey, B.; Haider, M.A.; Huisman, H.; Walz, J.; Salomon, G.; Schoots, I.G.; Richenberg, J.; Villeirs, G.; et al. ESUR/ESUI position paper: Developing artificial intelligence for precision diagnosis of prostate cancer using magnetic resonance imaging. Eur. Radiol. 2021, 31, 9567–9578.
- Moazemi, S.; Khurshid, Z.; Erle, A.; Lütje, S.; Essler, M.; Schultz, T.; Bundschuh, R.A. Machine Learning Facilitates Hotspot Classification in PSMA-PET/CT with Nuclear Medicine Specialist Accuracy. Diagnostics 2020, 10, 622.
- Erle, A.; Moazemi, S.; Lütje, S.; Essler, M.; Schultz, T.; Bundschuh, R.A. Evaluating a Machine Learning Tool for the Classification of Pathological Uptake in Whole-Body PSMA-PET-CT Scans. Tomography 2021, 7, 301–312.
- Zamboglou, C.; Bettermann, A.S.; Gratzke, C.; Mix, M.; Ruf, J.; Kiefer, S.; Jilg, C.A.; Benndorf, M.; Spohn, S.; Fassbender, T.F.; et al. Uncovering the invisible-prevalence, characteristics, and radiomics feature-based detection of visually undetectable intraprostatic tumor lesions in 68Ga-PSMA-11 PET images of patients with primary prostate cancer. Eur. J. Nucl. Med. Mol. Imaging 2021, 48, 1987–1997.
- Domachevsky, L.; Goldberg, N.; Bernstine, H.; Nidam, M.; Groshar, D. Quantitative characterisation of clinically significant intra-prostatic cancer by prostate-specific membrane antigen (PSMA) expression and cell density on PSMA-11. Eur. Radiol. 2018, 28, 5275–5283.
- Alongi, P.; Stefano, A.; Comelli, A.; Laudicella, R.; Scalisi, S.; Arnone, G.; Barone, S.; Spada, M.; Purpura, P.; Bartolotta, T.V.; et al. Radiomics analysis of 18F-Choline PET/CT in the prediction of disease outcome in high-risk prostate cancer: An explorative study on machine learning feature classification in 94 patients. Eur. Radiol. 2021, 31, 4595–4605.
- Nioche, C.; Orlhac, F.; Boughdad, S.; Reuzé, S.; Goya-Outi, J.; Robert, C.; Pellot-Barakat, C.; Soussan, M.; Frouin, F.; Buvat, I. LIFEx: A Freeware for Radiomic Feature Calculation in Multimodality Imaging to Accelerate Advances in the Characterization of Tumor Heterogeneity. Cancer Res. 2018, 78, 4786–4789.
- Cysouw, M.C.F.; Jansen, B.H.E.; van de Brug, T.; Oprea-Lager, D.E.; Pfaehler, E.; de Vries, B.M.; van Moorselaar, R.J.A.; Hoekstra, O.S.; Vis, A.N.; Boellaard, R. Machine learning-based analysis of [18F]DCFPyL PET radiomics for risk stratification in primary prostate cancer. Eur. J. Nucl. Med. Mol. Imaging 2021, 48, 340–349.
- Yao, F.; Bian, S.; Zhu, D.; Yuan, Y.; Pan, K.; Pan, Z.; Feng, X.; Tang, K.; Yang, Y. Machine learning-based radiomics for multiple primary prostate cancer biological characteristics prediction with 18F-PSMA-1007 PET: Comparison among different volume segmentation thresholds. Radiol. Med. 2022, 127, 1170–1178.
- Zamboglou, C.; Carles, M.; Fechter, T.; Kiefer, S.; Reichel, K.; Fassbender, T.F.; Bronsert, P.; Koeber, G.; Schilling, O.; Ruf, J.; et al. Radiomic features from PSMA PET for non-invasive intraprostatic tumor discrimination and characterization in patients with intermediate- and high-risk prostate cancer—A comparison study with histology reference. Theranostics 2019, 9, 2595–2605.
- Moazemi, S.; Erle, A.; Lütje, S.; Gaertner, F.C.; Essler, M.; Bundschuh, R.A. Estimating the Potential of Radiomics Features and Radiomics Signature from Pretherapeutic PSMA-PET-CT Scans and Clinical Data for Prediction of Overall Survival When Treated with 177Lu-PSMA. Diagnostics 2021, 11, 186.
- Acar, E.; Leblebici, A.; Ellidokuz, B.E.; Başbınar, Y.; Kaya, G.Ç. Machine learning for differentiating metastatic and completely responded sclerotic bone lesion in prostate cancer: A retrospective radiomics study. Br. J. Radiol. 2019, 92, 20190286.
- Peeken, J.C.; Shouman, M.A.; Kroenke, M.; Rauscher, I.; Maurer, T.; Gschwend, J.E.; Eiber, M.; Combs, S.E. A CT-based radiomics model to detect prostate cancer lymph node metastases in PSMA radioguided surgery patients. Eur. J. Nucl. Med. Mol. Imaging 2020, 47, 2968–2977.
- Bosetti, D.G.; Ruinelli, L.; Piliero, M.A.; van der Gaag, L.C.; Pesce, G.A.; Valli, M.; Bosetti, M.; Presilla, S.; Richetti, A.; Deantonio, L. Cone-beam computed tomography-based radiomics in prostate cancer: A mono-institutional study. Strahlenther. Onkol. 2020, 196, 943–951.
- Osman, S.O.S.; Leijenaar, R.T.H.; Cole, A.J.; Lyons, C.A.; Hounsell, A.R.; Prise, K.M.; O’Sullivan, J.M.; Lambin, P.; McGarry, C.K.; Jain, S. Computed Tomography-based Radiomics for Risk Stratification in Prostate Cancer. Int. J. Radiat. Oncol. Biol. Phys. 2019, 105, 448–456.
- Mostafaei, S.; Abdollahi, H.; Kazempour Dehkordi, S.; Shiri, I.; Razzaghdoust, A.; Zoljalali Moghaddam, S.H.; Saadipoor, A.; Koosha, F.; Cheraghi, S.; Mahdavi, S.R. CT imaging markers to improve radiation toxicity prediction in prostate cancer radiotherapy by stacking regression algorithm. Radiol. Med. 2020, 125, 87–97.
- Tanadini-Lang, S.; Bogowicz, M.; Veit-Haibach, P.; Huellner, M.; Pauli, C.; Shukla, V.; Guckenberger, M.; Riesterer, O. Exploratory Radiomics in Computed Tomography Perfusion of Prostate Cancer. Anticancer Res. 2018, 38, 685–690.
- Thomas, H.M.; Wang, H.Y.; Varghese, A.J.; Donovan, E.M.; South, C.P.; Saxby, H.; Nisbet, A.; Prakash, V.; Sasidharan, B.K.; Pavamani, S.P.; et al. Reproducibility in Radiomics: A Comparison of Feature Extraction Methods and Two Independent Datasets. Appl. Sci. 2023, 13, 7291.
- Chalmers, I.; Glasziou, P. Avoidable waste in the production and reporting of research evidence. Lancet 2009, 374, 86–89.
- Fukushima, K. Neocognitron: A self-organizing neural network model for a mechanism of pattern recognition unaffected by shift in position. Biol. Cybern. 1980, 36, 193–202.
- Hartenstein, A.; Lübbe, F.; Baur, A.D.J.; Rudolph, M.M.; Furth, C.; Brenner, W.; Amthauer, H.; Hamm, B.; Makowski, M.; Penzkofer, T. Prostate Cancer Nodal Staging: Using Deep Learning to Predict 68Ga-PSMA-Positivity from CT Imaging Alone. Sci. Rep. 2020, 10, 3398.
- Xu, D.; Ma, M.; Cao, M.; Kishan, A.U.; Nickols, N.G.; Scalzo, F.; Sheng, K. Mask R-CNN assisted 2.5D object detection pipeline of 68Ga-PSMA-11 PET/CT-positive metastatic pelvic lymph node after radical prostatectomy from solely CT imaging. Sci. Rep. 2023, 13, 1696.
- Borrelli, P.; Larsson, M.; Ulén, J.; Enqvist, O.; Trägårdh, E.; Poulsen, M.H.; Mortensen, M.A.; Kjölhede, H.; Høilund-Carlsen, P.F.; Edenbrandt, L. Artificial intelligence-based detection of lymph node metastases by PET/CT predicts prostate cancer-specific survival. Clin. Physiol. Funct. Imaging 2021, 41, 62–67.
- Ntakolia, C.; Diamantis, D.E.; Papandrianos, N.; Moustakidis, S.; Papageorgiou, E.I. A Lightweight Convolutional Neural Network Architecture Applied for Bone Metastasis Classification in Nuclear Medicine: A Case Study on Prostate Cancer Patients. Healthcare 2020, 8, 493.
- Capobianco, N.; Sibille, L.; Chantadisai, M.; Gafita, A.; Langbein, T.; Platsch, G.; Solari, E.L.; Shah, V.; Spottiswoode, B.; Eiber, M.; et al. Whole-body uptake classification and prostate cancer staging in 68Ga-PSMA-11 PET/CT using dual-tracer learning. Eur. J. Nucl. Med. Mol. Imaging 2022, 49, 517–526.
- Trägårdh, E.; Enqvist, O.; Ulén, J.; Jögi, J.; Bitzén, U.; Hedeer, F.; Valind, K.; Garpered, S.; Hvittfeldt, E.; Borrelli, P.; et al. Freely Available, Fully Automated AI-Based Analysis of Primary Tumour and Metastases of Prostate Cancer in Whole-Body [18F]-PSMA-1007 PET-CT. Diagnostics 2022, 12, 2101.
- Lee, J.J.; Yang, H.; Franc, B.L.; Iagaru, A.; Davidzon, G.A. Deep learning detection of prostate cancer recurrence with 18F-FACBC (fluciclovine, Axumin®) positron emission tomography. Eur. J. Nucl. Med. Mol. Imaging 2020, 47, 2992–2997.
- Kostyszyn, D.; Fechter, T.; Bartl, N.; Grosu, A.L.; Gratzke, C.; Sigle, A.; Mix, M.; Ruf, J.; Fassbender, T.F.; Kiefer, S.; et al. Intraprostatic Tumor Segmentation on PSMA PET Images in Patients with Primary Prostate Cancer with a Convolutional Neural Network. J. Nucl. Med. 2021, 62, 823–828.
- Wang, T.; Lei, Y.; Akin-Akintayo, O.O.; Ojo, O.A.; Akintayo, A.A.; Curran, W.J.; Liu, T.; Schuster, D.M.; Yang, X. Prostate and tumor segmentation on PET/CT using Dual Mask R-CNN. In Proceedings of the Medical Imaging 2021: Biomedical Applications in Molecular, Structural, and Functional Imaging, Online, 15 February 2021; Volume 11600, pp. 185–190.
- Matkovic, L.A.; Wang, T.; Lei, Y.; Akin-Akintayo, O.O.; Abiodun Ojo, O.A.; Akintayo, A.A.; Roper, J.; Bradley, J.D.; Liu, T.; Schuster, D.M.; et al. Prostate and dominant intraprostatic lesion segmentation on PET/CT using cascaded regional-net. Phys. Med. Biol. 2021, 66, 245006.
- Rainio, O.; Lahti, J.; Anttinen, M.; Ettala, O.; Seppänen, M.; Boström, P.; Kemppainen, J.; Klén, R. New method of using a convolutional neural network for 2D intraprostatic tumor segmentation from PET images. Res. Biomed. Eng. 2023, 39, 905–913.
- Holzschuh, J.C.; Mix, M.; Ruf, J.; Hölscher, T.; Kotzerke, J.; Vrachimis, A.; Doolan, P.; Ilhan, H.; Marinescu, I.M.; Spohn, S.K.B.; et al. Deep learning based automated delineation of the intraprostatic gross tumour volume in PSMA-PET for patients with primary prostate cancer. Radiother. Oncol. 2023, 188, 109774.
- Zhao, Y.; Gafita, A.; Tetteh, G.; Haupt, F.; Afshar-Oromieh, A.; Menze, B.; Eiber, M.; Rominger, A.; Shi, K. Deep Neural Network for Automatic Characterization of Lesions on 68Ga-PSMA PET/CT Images. Annu. Int. Conf. IEEE Eng. Med. Biol. Soc. 2019, 2019, 951–954.
- Ghezzo, S.; Mongardi, S.; Bezzi, C.; Samanes Gajate, A.M.; Preza, E.; Gotuzzo, I.; Baldassi, F.; Jonghi-Lavarini, L.; Neri, I.; Russo, T.; et al. External validation of a convolutional neural network for the automatic segmentation of intraprostatic tumor lesions on 68Ga-PSMA PET images. Front. Med. 2023, 10, 1133269.
- Montagnon, E.; Cerny, M.; Cadrin-Chênevert, A.; Hamilton, V.; Derennes, T.; Ilinca, A.; Vandenbroucke-Menu, F.; Turcotte, S.; Kadoury, S.; Tang, A. Deep learning workflow in radiology: A primer. Insights Imaging 2020, 11, 22.
- Kohli, M.; Prevedello, L.M.; Filice, R.W.; Geis, J.R. Implementing Machine Learning in Radiology Practice and Research. Am. J. Roentgenol. 2017, 208, 754–760.
- Mougiakakou, S.G.; Valavanis, I.K.; Nikita, A.; Nikita, K.S. Differential diagnosis of CT focal liver lesions using texture features, feature selection and ensemble driven classifiers. Artif. Intell. Med. 2007, 41, 25–37.
- Li, B.; Oka, R.; Xuan, P.; Yoshimura, Y.; Nakaguchi, T. Robust multi-modal prostate cancer classification via feature autoencoder and dual attention. Inform. Med. Unlocked 2022, 30, 100923.
- Chen, E.L.; Chung, P.C.; Chen, C.L.; Tsai, H.M.; Chang, C.I. An automatic diagnostic system for CT liver image classification. IEEE Trans. Biomed. Eng. 1998, 45, 783–794.
- Wildeboer, R.R.; van Sloun, R.J.G.; Wijkstra, H.; Mischi, M. Artificial intelligence in multiparametric prostate cancer imaging with focus on deep-learning methods. Comput. Methods Programs Biomed. 2020, 189, 105316.
- Mongan, J.; Moy, L.; Kahn, C.E., Jr. Checklist for Artificial Intelligence in Medical Imaging (CLAIM): A Guide for Authors and Reviewers. Radiol. Artif. Intell. 2020, 2, e200029.
- Yao, J.; Miller, M.; Franaszek, M.; Summers, R.M. Colonic polyp segmentation in CT colonography based on fuzzy clustering and deformable models. IEEE Trans. Med. Imaging 2004, 23, 1344–1352.
- Leung, K.H.; Rowe, S.P.; Leal, J.P.; Ashrafinia, S.; Sadaghiani, M.S.; Chung, H.W.; Dalaie, P.; Tulbah, R.; Yin, Y.; VanDenBerg, R.; et al. Deep learning and radiomics framework for PSMA-RADS classification of prostate cancer on PSMA PET. EJNMMI Res. 2022, 12, 76.
- Cheng, P.M.; Montagnon, E.; Yamashita, R.; Pan, I.; Cadrin-Chênevert, A.; Perdigón Romero, F.; Chartrand, G.; Kadoury, S.; Tang, A. Deep Learning: An Update for Radiologists. Radiographics 2021, 41, 1427–1445.
- Li, Z.; Fang, J.; Qiu, R.; Gong, H.; Zhang, W.; Li, L.; Jiang, J. CDA-Net: A contrastive deep adversarial model for prostate cancer segmentation in MRI images. Biomed. Signal Process. Control 2023, 83, 104622.
- Abdelmaksoud, I.R.; Shalaby, A.; Mahmoud, A.; Elmogy, M.; Aboelfetouh, A.; Abou El-Ghar, M.; El-Melegy, M.; Alghamdi, N.S.; El-Baz, A. Precise Identification of Prostate Cancer from DWI Using Transfer Learning. Sensors 2021, 21, 3664.
- Chen, Q.; Xu, X.; Hu, S.; Li, X.; Zou, Q.; Li, Y. A transfer learning approach for classification of clinical significant prostate cancers from mpMRI scans. In Proceedings of the Medical Imaging 2017: Computer-Aided Diagnosis, Orlando, FL, USA, 16 March 2017; Volume 10134, pp. 1154–1157.
- Rai, T.; Morisi, A.; Bacci, B.; Bacon, N.J.; Thomas, S.A.; La Ragione, R.M.; Bober, M.; Wells, K. Can ImageNet feature maps be applied to small histopathological datasets for the classification of breast cancer metastatic tissue in whole slide images? In Proceedings of the Medical Imaging 2019: Digital Pathology, San Diego, CA, USA, 18 March 2019; Volume 10956, pp. 191–200.
- Tajbakhsh, N.; Shin, J.Y.; Gurudu, S.R.; Hurst, R.T.; Kendall, C.B.; Gotway, M.B.; Liang, J. Convolutional Neural Networks for Medical Image Analysis: Full Training or Fine Tuning? IEEE Trans. Med. Imaging 2016, 35, 1299–1312.
- Thomas, S.A. Enhanced Transfer Learning Through Medical Imaging and Patient Demographic Data Fusion. arXiv 2021, arXiv:2111.14388.
- Bommasani, R.; Hudson, D.A.; Adeli, E.; Altman, R.; Arora, S.; von Arx, S.; Bernstein, M.S.; Bohg, J.; Bosselut, A.; Brunskill, E.; et al. On the Opportunities and Risks of Foundation Models. arXiv 2022, arXiv:2108.07258.
- Zhang, S.; Metaxas, D. On the challenges and perspectives of foundation models for medical image analysis. Med. Image Anal. 2024, 91, 102996.
- Willemink, M.J.; Roth, H.R.; Sandfort, V. Toward Foundational Deep Learning Models for Medical Imaging in the New Era of Transformer Networks. Radiol. Artif. Intell. 2022, 4, e210284.
- Steinberg, E.; Jung, K.; Fries, J.A.; Corbin, C.K.; Pfohl, S.R.; Shah, N.H. Language models are an effective representation learning technique for electronic health record data. J. Biomed. Inform. 2021, 113, 103637.
- Yang, X.; Chen, A.; PourNejatian, N.; Shin, H.C.; Smith, K.E.; Parisien, C.; Compas, C.; Martin, C.; Costa, A.B.; Flores, M.G.; et al. A large language model for electronic health records. NPJ Digit. Med. 2022, 5, 194.
- Tiu, E.; Talius, E.; Patel, P.; Langlotz, C.P.; Ng, A.Y.; Rajpurkar, P. Expert-level detection of pathologies from unannotated chest X-ray images via self-supervised learning. Nat. Biomed. Eng. 2022, 6, 1399–1406.
- Moor, M.; Banerjee, O.; Abad, Z.S.H.; Krumholz, H.M.; Leskovec, J.; Topol, E.J.; Rajpurkar, P. Foundation models for generalist medical artificial intelligence. Nature 2023, 616, 259–265.
- Azad, B.; Azad, R.; Eskandari, S.; Bozorgpour, A.; Kazerouni, A.; Rekik, I.; Merhof, D. Foundational Models in Medical Imaging: A Comprehensive Survey and Future Vision. arXiv 2023, arXiv:2310.18689.
- Van Engelen, J.E.; Hoos, H.H. A survey on semi-supervised learning. Mach. Learn. 2020, 109, 373–440.
- Arazo, E.; Ortego, D.; Albert, P.; O’Connor, N.E.; McGuinness, K. Pseudo-Labeling and Confirmation Bias in Deep Semi-Supervised Learning. arXiv 2020, arXiv:1908.02983.
- Berthelot, D.; Carlini, N.; Goodfellow, I.; Papernot, N.; Oliver, A.; Raffel, C.A. MixMatch: A Holistic Approach to Semi-Supervised Learning. arXiv 2019, arXiv:1905.02249.
- Sohn, K.; Berthelot, D.; Li, C.-L.; Zhang, Z.; Carlini, N.; Cubuk, E.D.; Kurakin, A.; Zhang, H.; Raffel, C. FixMatch: Simplifying Semi-Supervised Learning with Consistency and Confidence. arXiv 2020, arXiv:2001.07685.
- Laine, S.; Aila, T. Temporal Ensembling for Semi-Supervised Learning. arXiv 2017, arXiv:1610.02242.
- Polyak, B.T.; Juditsky, A. Acceleration of Stochastic Approximation by Averaging. SIAM J. Control Optim. 1992, 30, 838–855.
- Tarvainen, A.; Valpola, H. Mean teachers are better role models: Weight-averaged consistency targets improve semi-supervised deep learning results. arXiv 2018, arXiv:1703.01780.
- Yu, L.; Wang, S.; Li, X.; Fu, C.-W.; Heng, P.-A. Uncertainty-Aware Self-ensembling Model for Semi-supervised 3D Left Atrium Segmentation. In Medical Image Computing and Computer Assisted Intervention—MICCAI 2019. MICCAI 2019. Lecture Notes in Computer Science; Springer: Cham, Switzerland, 2019; Volume 11765.
- Wang, K.; Zhan, B.; Zu, C.; Wu, X.; Zhou, J.; Zhou, L.; Wang, Y. Tripled-Uncertainty Guided Mean Teacher Model for Semi-supervised Medical Image Segmentation. In Proceedings of the Medical Image Computing and Computer Assisted Intervention—MICCAI 2021: 24th International Conference, Strasbourg, France, 27 September–1 October 2021; Proceedings, Part II. pp. 450–460.
- Xia, Y.; Yang, D.; Yu, Z.; Liu, F.; Cai, J.; Yu, L.; Zhu, Z.; Xu, D.; Yuille, A.; Roth, H. Uncertainty-aware multi-view co-training for semi-supervised medical image segmentation and domain adaptation. Med. Image Anal. 2020, 65, 101766.
- Luo, L.; Yu, L.; Chen, H.; Liu, Q.; Wang, X.; Xu, J.; Heng, P.A. Deep Mining External Imperfect Data for Chest X-Ray Disease Screening. IEEE Trans. Med. Imaging 2020, 39, 3583–3594.
- Luo, X.; Liao, W.; Chen, J.; Song, T.; Chen, Y.; Zhang, S.; Chen, N.; Wang, G.; Zhang, S. Efficient Semi-supervised Gross Target Volume of Nasopharyngeal Carcinoma Segmentation via Uncertainty Rectified Pyramid Consistency. In Medical Image Computing and Computer Assisted Intervention—MICCAI 2021. MICCAI 2021. Lecture Notes in Computer Science; De Bruijne, M., Ed.; Springer: Cham, Switzerland, 2021; Volume 12902.
- Wang, T.; Lu, J.; Lai, Z.; Wen, J.; Kong, H. Uncertainty-Guided Pixel Contrastive Learning for Semi-Supervised Medical Image Segmentation. In Proceedings of the Thirty-First International Joint Conference on Artificial Intelligence, Vienna, Austria, 23–29 July 2022; pp. 1444–1450.
- Mehrtash, A.; Wells, W.M.; Tempany, C.M.; Abolmaesumi, P.; Kapur, T. Confidence Calibration and Predictive Uncertainty Estimation for Deep Medical Image Segmentation. IEEE Trans. Med. Imaging 2020, 39, 3868–3878.
- Xu, C.; Yang, Y.; Xia, Z.; Wang, B.; Zhang, D.; Zhang, Y.; Zhao, S. Dual Uncertainty-Guided Mixing Consistency for Semi-Supervised 3D Medical Image Segmentation. IEEE Trans. Big Data 2023, 9, 1156–1170.
- Xiang, J.; Qiu, P.; Yang, Y. FUSSNet: Fusing Two Sources of Uncertainty for Semi-supervised Medical Image Segmentation. In Medical Image Computing and Computer Assisted Intervention—MICCAI 2022. MICCAI 2022. Lecture Notes in Computer Science; Wang, L., Dou, Q., Fletcher, P.T., Speidel, S., Li, S., Eds.; Springer: Cham, Switzerland, 2022; Volume 13438.
- Ishida, T.; Niu, G.; Hu, W.; Sugiyama, M. Learning from Complementary Labels. arXiv 2017, arXiv:1705.07541.
- Kim, Y.; Yim, J.; Yun, J.; Kim, J. NLNL: Negative Learning for Noisy Labels. arXiv 2019, arXiv:1908.07387.
- Rizve, M.N.; Duarte, K.; Rawat, Y.S.; Shah, M. In Defense of Pseudo-Labeling: An Uncertainty-Aware Pseudo-label Selection Framework for Semi-Supervised Learning. arXiv 2021, arXiv:2101.06329.
- Kim, Y.; Yun, J.; Shon, H.; Kim, J. Joint Negative and Positive Learning for Noisy Labels. In Proceedings of the 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Virtual, 19–25 June 2021; pp. 9437–9446.
- Zheng, H.; Lin, L.; Hu, H.; Zhang, Q.; Chen, Q.; Iwamoto, Y.; Han, X.; Chen, Y.-W.; Tong, R.; Wu, J. Semi-supervised Segmentation of Liver Using Adversarial Learning with Deep Atlas Prior. In Medical Image Computing and Computer Assisted Intervention—MICCAI 2019. MICCAI 2019. Lecture Notes in Computer Science; Springer: Cham, Switzerland, 2019; Volume 11769.
- Li, S.; Zhang, C.; He, X. Shape-aware semi-supervised 3D semantic segmentation for medical images. In Proceedings of the Medical Image Computing and Computer Assisted Intervention—MICCAI 2020, 23rd International Conference, Lima, Peru, 4–8 October 2020; Proceedings, Part I. pp. 552–561.
- Wang, P.; Peng, J.; Pedersoli, M.; Zhou, Y.; Zhang, C.; Desrosiers, C. CAT: Constrained Adversarial Training for Anatomically-Plausible Semi-Supervised Segmentation. IEEE Trans. Med. Imaging 2023, 42, 2146–2161.
- Ouali, Y.; Hudelot, C.; Tami, M. Semi-Supervised Semantic Segmentation with Cross-Consistency Training. In Proceedings of the 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, USA, 13–19 June 2020; pp. 12671–12681.
- Liu, Y.; Tian, Y.; Chen, Y.; Liu, F.; Belagiannis, V.; Carneiro, G. Perturbed and Strict Mean Teachers for Semi-supervised Semantic Segmentation. In Proceedings of the 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), New Orleans, LA, USA, 18–24 June 2022; pp. 4248–4257.
- Wu, Y.; Xu, M.; Ge, Z.; Cai, J.; Zhang, L. Semi-supervised Left Atrium Segmentation with Mutual Consistency Training. In Medical Image Computing and Computer Assisted Intervention—MICCAI 2021. MICCAI 2021. Lecture Notes in Computer Science; Springer: Cham, Switzerland, 2021; Volume 12902.
- Wu, Y.; Ge, Z.; Zhang, D.; Xu, M.; Zhang, L.; Xia, Y.; Cai, J. Mutual consistency learning for semi-supervised medical image segmentation. Med. Image Anal. 2022, 81, 102530.
- Lai, X.; Tian, Z.; Jiang, L.; Liu, S.; Zhao, H.; Wang, L.; Jia, J. Semi-supervised Semantic Segmentation with Directional Context-aware Consistency. In Proceedings of the 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Nashville, TN, USA, 20–25 June 2021; pp. 1205–1214.
- Apiparakoon, T.; Rakratchatakul, N.; Chantadisai, M.; Vutrapongwatana, U.; Kingpetch, K.; Sirisalipoch, S.; Rakvongthai, Y.; Chaiwatanarat, T.; Chuangsuwanich, E. MaligNet: Semisupervised Learning for Bone Lesion Instance Segmentation Using Bone Scintigraphy. IEEE Access 2020, 8, 27047–27066.
- Moreau, N.; Rousseau, C.; Fourcade, C.; Santini, G.; Brennan, A.; Ferrer, L.; Lacombe, M.; Guillerminet, C.; Colombié, M.; Jézéquel, P.; et al. Automatic Segmentation of Metastatic Breast Cancer Lesions on 18F-FDG PET/CT Longitudinal Acquisitions for Treatment Response Assessment. Cancers 2021, 14, 101.
- Alzubaidi, L.; Al-Amidie, M.; Al-Asadi, A.; Humaidi, A.J.; Al-Shamma, O.; Fadhel, M.A.; Zhang, J.; Santamaría, J.; Duan, Y. Novel Transfer Learning Approach for Medical Imaging with Limited Labeled Data. Cancers 2021, 13, 1590.
- Goodfellow, I.; Pouget-Abadie, J.; Mirza, M.; Xu, B.; Warde-Farley, D.; Ozair, S.; Courville, A.; Bengio, Y. Generative adversarial networks. Commun. ACM 2020, 63, 139–144.
- Xue, S.; Gafita, A.; Afshar-Oromieh, A.; Eiber, M.; Rominger, A.; Shi, K. Voxel-wise Prediction of Post-therapy Dosimetry for 177Lu-PSMA I&T Therapy using Deep Learning. J. Nucl. Med. 2020, 61, 1424.
- Murakami, Y.; Magome, T.; Matsumoto, K.; Sato, T.; Yoshioka, Y.; Oguchi, M. Fully automated dose prediction using generative adversarial networks in prostate cancer patients. PLoS ONE 2020, 15, e0232697.
- Sultana, S.; Robinson, A.; Song, D.Y.; Lee, J. CNN-based hierarchical coarse-to-fine segmentation of pelvic CT images for prostate cancer radiotherapy. Proc. SPIE Int. Soc. Opt. Eng. 2020, 11315, 113151I.
- Zhang, Z.; Zhao, T.; Gay, H.; Zhang, W.; Sun, B. ARPM-net: A novel CNN-based adversarial method with Markov random field enhancement for prostate and organs at risk segmentation in pelvic CT images. Med. Phys. 2021, 48, 227–237.
- Heilemann, G.; Matthewman, M.; Kuess, P.; Goldner, G.; Widder, J.; Georg, D.; Zimmermann, L. Can Generative Adversarial Networks help to overcome the limited data problem in segmentation? Z. Med. Phys. 2022, 32, 361–368.
- Chan, Y.; Li, M.; Parodi, K.; Belka, C.; Landry, G.; Kurz, C. Feasibility of CycleGAN enhanced low dose CBCT imaging for prostate radiotherapy dose calculation. Phys. Med. Biol. 2023, 68, 105014.
- Pan, S.; Wang, T.; Qiu, R.L.J.; Axente, M.; Chang, C.W.; Peng, J.; Patel, A.B.; Shelton, J.; Patel, S.A.; Roper, J.; et al. 2D medical image synthesis using transformer-based denoising diffusion probabilistic model. Phys. Med. Biol. 2023, 68, 105004.
- Belue, M.J.; Turkbey, B. Tasks for artificial intelligence in prostate MRI. Eur. Radiol. Exp. 2022, 6, 33.
- Harmon, S.A.; Tuncer, S.; Sanford, T.; Choyke, P.L.; Türkbey, B. Artificial intelligence at the intersection of pathology and radiology in prostate cancer. Diagn. Interv. Radiol. 2019, 25, 183–188.
- Rabaan, A.A.; Bakhrebah, M.A.; AlSaihati, H.; Alhumaid, S.; Alsubki, R.A.; Turkistani, S.A.; Al-Abdulhadi, S.; Aldawood, Y.; Alsaleh, A.A.; Alhashem, Y.N.; et al. Artificial Intelligence for Clinical Diagnosis and Treatment of Prostate Cancer. Cancers 2022, 14, 5595.
- European Society of Radiology (ESR). Current practical experience with artificial intelligence in clinical radiology: A survey of the European Society of Radiology. Insights Imaging 2022, 13, 107.
- Zhang, J.; Zhang, Z.M. Ethics and governance of trustworthy medical artificial intelligence. BMC Med. Inform. Decis. Mak. 2023, 23, 7.
- Hulsen, T. An overview of publicly available patient-centered prostate cancer datasets. Transl. Androl. Urol. 2019, 8, S64–S77.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2024 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).