1. Background
Quantitative image analysis has emerged as a valuable tool in radiation oncology, enabling the extraction of imaging features to predict treatment outcomes, assess tumor characteristics, and guide clinical decision-making. The integration of dosimetric data with imaging-based predictive models has further advanced this field, offering insights into tumor biology and patient responses to therapy [1,2]. Additionally, image-based analysis has shown promise in the diagnosis of tumor recurrence, particularly after radiotherapy, by identifying subtle imaging features that differentiate true progression from pseudo-progression or post-treatment changes. Predictive modeling approaches have been explored for recurrence assessment in various cancers, including glioblastoma and head and neck cancers, by integrating imaging biomarkers with clinical and treatment data. For example, the texture analysis of magnetic resonance imaging and positron emission tomography/computed tomography (PET/CT) images has been used to distinguish residual or recurrent disease from treatment-induced necrosis, demonstrating encouraging accuracy in several studies [3].
Generalizability refers to the ability of a predictive model to maintain reliability and accuracy when applied to data from a new, independent cohort of patients [4,5]. If a model is not generalizable across different datasets, it may capture biases introduced by data generation and processing protocols rather than learning meaningful relationships between features and clinical outcomes. Studies have shown that when generalizability is not addressed, predictive models may exhibit performance degradation over time as healthcare practices and patient characteristics evolve [6].
The most effective way to improve model generalizability is to use large-scale multi-institutional datasets. However, while multi-institutional datasets are highly desirable for enhancing model performance, it is essential to take precautions to prevent domain dependency when using them. Healthcare settings can vary in terms of unobserved confounders, deployment environments, protocols, and data drift over time [7], resulting in a domain dependency that affects the output of a predictive model [8,9]. In radiation oncology, domain dependency means that variations in imaging parameters impact the robustness and performance of the predictive models built upon them, and it can degrade prediction performance as much as low generalizability does. Although domain dependency in a medical dataset, especially imaging data, can usually be prevented, some parameters, such as the acquisition instruments, are inherent characteristics of the training data and cannot be removed. Such parameters must therefore be incorporated into the training process as features.
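When an acquisition or planning parameter cannot be removed from the data, one simple way to fold it into training is to treat it as a categorical feature. The sketch below, with hypothetical feature values and system labels, one-hot encodes the source treatment planning system alongside dosimetric features:

```python
import numpy as np

# Sketch: incorporating the source treatment planning system as a
# categorical feature via one-hot encoding. All values are hypothetical
# and for illustration only.
systems = ["Pinnacle", "RayStation", "Pinnacle"]        # per-patient source TPS
categories = sorted(set(systems))                       # stable column order
one_hot = np.array([[1.0 if s == c else 0.0 for c in categories]
                    for s in systems])

dosimetric_features = np.array([[70.1, 45.2],
                                [69.8, 44.9],
                                [70.3, 45.6]])          # e.g., D95 and D50 in Gy
X = np.hstack([dosimetric_features, one_hot])           # training matrix
```

With the system identity present as a feature, a model can learn to separate system-driven variation from the clinically meaningful signal rather than silently absorbing it.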
Radiation oncology is well positioned to benefit from advancements in predictive modeling, as these models can provide clinical insights by integrating complex treatment data, particularly radiation dose representation [10]. One area where predictive modeling has shown promise is in assessing treatment outcomes and evaluating the risk of radiation-induced complications. When developing models designed to predict treatment response across various medical fields, including radiation oncology, the quality and quantity of training data play a crucial role in ensuring model accuracy and reliability.
The accurate calculation of radiation dose in radiation oncology treatments plays a significant role in determining treatment response, as variations in dose distribution can lead to different clinical outcomes. Precise dose calculation and delivery are crucial factors in achieving optimal tumor control while minimizing damage to surrounding healthy tissues [11]. Other studies have highlighted the direct link between dose calculation accuracy and treatment response, underlining the importance of employing sophisticated algorithms and techniques to ensure precise dose delivery and optimize patient outcomes [11,12].
In this study, we evaluated the impact of the treatment planning system on dose calculation to determine whether this parameter should be considered when a radiotherapy patient cohort is used to develop a predictive model. By conducting this comparative analysis, we aimed to comprehensively assess the impact of treatment planning system selection on dose calculations in head and neck radiation therapy planning. To minimize the effect of different dose calculation algorithms, we used two treatment planning systems with similar dose calculation algorithms. Our findings provide valuable insights into the potential differences and implications for clinical decision-making models when different treatment planning systems are employed in radiotherapy practice.
2. Methods
In this study, we comprehensively compared the dose calculation algorithms (DCAs) of two widely used treatment planning systems, Pinnacle 9.10 (Philips Radiation Oncology Systems, Fitchburg, WI, USA) and RayStation 11 (RaySearch Laboratories AB, Stockholm, Sweden). Both treatment planning systems are specifically designed for radiation therapy treatment planning and employ a collapsed cone convolution-type DCA.
A patient cohort consisting of 19 individuals with standard head and neck cancer treatment plans was included in this study following approval from the Institutional Review Board. The patients were chosen to represent a diverse range of tumor characteristics and anatomical variations commonly encountered in clinical practice.
To investigate the impact of the dose calculation grid size on the dose calculations, two grid size options were employed: 1 mm and 3 mm. The smaller grid size allows for higher spatial resolution, potentially capturing finer details, while the larger grid size reduces computation time at the expense of spatial resolution.
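To illustrate why the grid size matters, the following sketch emulates a 3 mm grid by block-averaging a 1 mm sampled dose profile. The sigmoid profile is synthetic, not measured data; the point is only that the coarse grid deviates from the fine grid most where the dose gradient is steep:

```python
import numpy as np

# Synthetic 1-D dose profile sampled on a 1 mm grid (illustrative, in cGy).
x_1mm = np.arange(0, 60, 1.0)                            # position in mm
dose_1mm = 7000.0 / (1.0 + np.exp((x_1mm - 30.0) / 2.0))  # steep dose falloff

# Emulate a 3 mm grid by averaging every 3 consecutive 1 mm samples.
dose_3mm = dose_1mm.reshape(-1, 3).mean(axis=1)

# Compare the coarse grid to the fine grid at the coarse voxel centers.
fine_at_centers = dose_1mm[1::3]
max_diff = np.max(np.abs(dose_3mm - fine_at_centers))    # largest in the gradient
```

In a high-gradient region (e.g., near an organ-at-risk boundary), the averaging inherent in a coarser grid shifts the reported DVH parameters, which is consistent with the grid-size sensitivity examined in this study.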
Additionally, two different CT density curves were utilized to assess their influence on dose calculations. CT density curves reflect the relationship between Hounsfield unit (HU) values from the CT scan and tissue density. By applying different curves, variations in tissue density mapping can be evaluated, which may affect dose calculation accuracy.
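Operationally, a CT density curve is a piecewise-linear lookup from HU to mass density. The sketch below, using generic placeholder calibration points rather than the curves used in this study, shows how such a mapping is typically applied:

```python
import numpy as np

# Illustrative HU-to-mass-density lookup via piecewise-linear interpolation.
# These calibration points are generic placeholders, NOT the curves used in
# this study; each scanner requires its own measured calibration.
hu_points      = np.array([-1000.0, -800.0, 0.0, 300.0, 1500.0, 3000.0])
density_points = np.array([0.001, 0.2, 1.0, 1.2, 1.85, 2.7])  # g/cm^3

def hu_to_density(hu):
    """Map Hounsfield units to mass density (g/cm^3) by linear interpolation."""
    return np.interp(hu, hu_points, density_points)
```

Two different curves applied to the same HU volume yield different densities, and therefore different radiological path lengths and computed doses, which is the mechanism behind the CT-curve comparison performed here.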
For each patient, the previously generated Pinnacle treatment plan was transferred to RayStation for dose recalculation. This approach ensured a direct comparison between the two treatment planning systems, with consistent patient anatomy and treatment parameters.
To minimize discrepancies in the delineation of regions of interest (ROIs) between the two systems, special care was taken in the definition of treatment volumes. The external contour in RayStation was extended to encompass the patient support structures, aligning with the definition used in Pinnacle. This ensured consistency in the ROI delineation process and minimized potential differences resulting from variations in contouring.
Various dose–volume histogram (DVH) parameters were extracted and analyzed to assess the dose calculation differences between the two treatment planning systems. DVH provides information about the distribution of radiation dose within specific structures. In this study, we focused on the following parameters:
- D99%: The dose received by 99% of the volume of a specific organ or tumor. It represents a near-minimum dose within the structure.
- D98%: The dose received by 98% of the volume. It represents a very low dose threshold within the structure.
- D95%: The dose received by 95% of the volume. It represents a low dose threshold within the structure.
- D50%: The dose received by 50% of the volume. It approximates the median dose within the structure.
- D2%: The dose received by 2% of the volume. It represents a near-maximum dose within the structure.
- D1%: The dose received by 1% of the volume. It represents the highest-dose region of the structure.
These parameters provide critical insights into the delivered dose distribution within organs-at-risk, such as the left and right parotids, larynx, spinal cord, and brainstem, and the gross tumor volume (GTV). By analyzing these parameters, we can evaluate the potential differences in dose calculations between the two treatment planning systems.
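A minimal voxel-based sketch of how a Dx% parameter can be read off a dose array is given below. It assumes uniform voxel volumes and takes the nearest voxel rather than interpolating; clinical treatment planning systems compute these values from interpolated cumulative DVHs, so this is an approximation for illustration:

```python
import numpy as np

def dvh_dose_at_volume(doses, volume_pct):
    """Return D{volume_pct}%: the minimum dose received by the hottest
    volume_pct percent of the structure's voxels.

    Assumes equal voxel volumes; `doses` is a flat array of per-voxel doses.
    """
    doses = np.sort(np.asarray(doses, dtype=float))[::-1]   # hottest first
    # Index of the voxel at the requested cumulative-volume level.
    k = max(int(np.ceil(volume_pct / 100.0 * doses.size)) - 1, 0)
    return doses[k]
```

For example, `dvh_dose_at_volume(gtv_doses, 95)` yields D95% for a hypothetical `gtv_doses` array of per-voxel GTV doses.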
3. Results and Discussion
The calculated dose differences (in cGy) between Pinnacle and RayStation, using a 3 mm dose grid and CT curve type 1, for the studied ROIs are visualized in Figure 1. The minimum and maximum dose values are reported for each ROI. In the brainstem, the dose differences ranged from a minimum of 0.638 cGy to a maximum of 67.347 cGy for the different dose parameters (D1, D2, D50, D95, D98, and D99). Similarly, for the GTV, the dose differences varied from 1.442 cGy to 170.040 cGy across different dose parameters. The left and right parotids showed dose differences ranging from 0.959 cGy to 173.283 cGy and from 6.874 cGy to 65.198 cGy, respectively. The spinal cord exhibited dose differences between 1.477 cGy and 40.624 cGy.
When the dose grid size was changed to 1 mm, the calculated dose differences between Pinnacle and RayStation again exhibited noticeable variations.
Figure 2 provides a visual representation of the minimum and maximum dose values for each ROI using the 1 mm grid size and CT curve type 1. In the brainstem, the dose differences ranged from a minimum of 1.247 cGy to a maximum of 73.276 cGy for the different dose parameters. Similarly, for the GTV, the dose differences varied from 7.700 cGy to 191.767 cGy across the different dose parameters. The left and right parotids exhibited dose differences ranging from 2.160 cGy to 172.008 cGy and 5.089 cGy to 82.167 cGy, respectively. The spinal cord showed dose differences between 2.695 cGy and 41.875 cGy. These results clearly demonstrate the impact of changing the dose grid size on the calculated dose differences between the two treatment planning systems. The differences observed for each ROI and dose parameter indicate that the selection of the dose grid size can significantly influence the accuracy of the dose calculation.
To investigate the impact of changing the dose grid size on a single treatment planning system, specifically RayStation, the grid size was adjusted from 1 mm to 3 mm. The resulting dose differences between the 3 mm and 1 mm grid sizes demonstrate the minimum and maximum dose values for each ROI (Figure 3). For the brainstem, the dose differences ranged from a minimum of 0.733 cGy to a maximum of 100.841 cGy for the different dose parameters (D1, D2, D50, D95, D98, and D99). Similarly, for the GTV, the dose differences varied from 1.338 cGy to 150.044 cGy across the dose parameters. The left and right parotids exhibited dose differences ranging from 1.156 cGy to 25.139 cGy and 0.147 cGy to 79.911 cGy, respectively. The spinal cord showed dose differences between 3.512 cGy and 286.363 cGy. These results highlight the influence of changing the dose grid size solely within the RayStation treatment planning system. The observed variations for each ROI and dose parameter underscore the sensitivity of dose calculations to the grid size selection.
Similarly, the dose grid size was modified from 1 mm to 3 mm within the Pinnacle treatment planning system. The resulting dose differences between the 3 mm and 1 mm grid sizes illustrate the minimum and maximum dose values for each ROI.
Figure 4 provides a visual representation of these changes. For the brainstem, the dose differences ranged from a minimum of 1.073 cGy to a maximum of 52.947 cGy for the different dose parameters (D1, D2, D50, D95, D98, and D99). In the case of the GTV, the dose differences varied from 2.893 cGy to 61.492 cGy across the dose parameters. The left and right parotids exhibited dose differences ranging from 0.915 cGy to 11.839 cGy and 1.538 cGy to 43.198 cGy, respectively. The spinal cord showed dose differences between 0.432 cGy and 260 cGy.
These results highlight the influence of changing the dose grid size exclusively within the Pinnacle treatment planning system. The observed variations for each ROI and dose parameter emphasize the sensitivity of dose calculations to the grid size selection.
A comparison of the dose differences between the RayStation and Pinnacle treatment planning systems indicated that the variance in dose grid size had a larger impact on RayStation than on Pinnacle doses.
Lastly, to evaluate the impact of the CT curve on dose calculations within RayStation, the dose differences between type 1 and type 2 CT curves were analyzed (Figure 5). For the brainstem, the dose differences ranged from a minimum of 5.692 cGy to a maximum of 58.823 cGy across various dose parameters (D1, D2, D50, D95, D98, and D99). Similarly, the GTV showed dose differences ranging from 9.101 cGy to 107.235 cGy. The left and right parotids exhibited dose differences ranging from 1.080 cGy to 3.301 cGy and 1.668 cGy to 7.280 cGy, respectively. The spinal cord displayed dose differences between 0.012 cGy and 7.041 cGy.
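The per-ROI ranges reported above can be reproduced from DVH exports with a few lines of code. The sketch below, using hypothetical dose values rather than the study data, computes the minimum and maximum absolute differences over the studied DVH parameters for one ROI:

```python
# Sketch: per-ROI minimum/maximum absolute dose differences (cGy) over a
# set of DVH parameters, comparing two treatment planning systems.
PARAMS = ["D1", "D2", "D50", "D95", "D98", "D99"]

def dose_difference_range(dvh_a, dvh_b):
    """dvh_a, dvh_b: dicts mapping DVH parameter -> dose (cGy) for one ROI.

    Returns (min, max) of the absolute per-parameter differences."""
    diffs = [abs(dvh_a[p] - dvh_b[p]) for p in PARAMS]
    return min(diffs), max(diffs)
```

Repeating this per ROI and per configuration (grid size, CT curve) yields exactly the kind of range summaries reported for Figures 1 through 5.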
Quantitative image analysis plays an increasingly important role in addressing challenges associated with distinguishing true progression from pseudo-progression following radiotherapy [13,14,15]. Advanced imaging techniques, including texture analysis and feature extraction, have been utilized to uncover patterns in imaging data that may not be visually apparent. These techniques are particularly valuable in integrating multi-modal data, such as genomics, histopathology, and imaging, along with treatment-related parameters like radiotherapy treatment planning [16,17,18,19], and hold great promise for improving the accuracy and robustness of recurrence prediction models [20,21]. Studies have demonstrated that combining imaging-based machine learning with genomic markers and radiotherapy dose distribution improves the performance of clinical decision-making models in distinguishing true progression from pseudo-progression. While these applications have shown promise, variability in imaging protocols, feature extraction methods, and data acquisition remains a significant barrier to the clinical implementation and reliability of these models [22,23,24,25]. Efforts to standardize feature extraction workflows, including initiatives like the Image Biomarker Standardization Initiative, are helping to reduce variability and improve reproducibility [26].
In this study, we analyzed the impact of two key factors in dose calculation, the dose grid size and the CT density curve, to investigate potential domain dependency in dosimetric data. Specifically, we examined how variations in treatment planning software or configuration influence dose calculations in radiotherapy treatment plans. Understanding these variations is essential for ensuring the reliability of predictive models that utilize dosimetric data for treatment outcome assessment. In a head and neck cohort of 19 patients, we compared the dose differences between two treatment planning systems to simulate a multi-institutional environment. We used the collapsed cone convolution method in both systems to minimize external effects in the analysis of domain dependency, performed dose calculations with two dose grid sizes, and compared the dose differences using various DVH parameters. The ranges of dose differences between the two systems were similar for the 1 mm and 3 mm grid sizes, indicating that, although the grid size itself can affect the dose calculation, it had only a small influence on the domain dependency when the dose calculation was performed with different treatment planning systems. Within a single treatment planning system, the comparison of dose differences indicated that the variance of the dose grid size had a larger impact on RayStation than on Pinnacle. Furthermore, the analysis of dose differences with different CT density conversion curves indicated that domain discrepancy may occur in RayStation when variant CT curves are applied.
4. Conclusions
In conclusion, the observed dose differences between treatment planning systems, as demonstrated in our study, have the potential to impact treatment response and the subsequent development of decision-making models. While the maximum dose difference may occur only in rare cases and be patient-dependent, it should not be disregarded as it can have practical implications.
To mitigate the impact of these discrepancies, it is essential to adhere to reporting guidelines such as the Transparent Reporting of a Multivariable Prediction Model for Individual Prognosis or Diagnosis (TRIPOD) [27]. Explicitly reporting the data collection tools and processes according to TRIPOD guidelines can improve transparency, establish a strong link between the machine learning and clinical communities, and facilitate better understanding and interpretation of model performance. Moreover, adopting best practices from classical outcome prediction modeling, including prospectively registered study protocols, data analysis plans [28,29], and the publication of full models and code for independent validation, is crucial for ensuring transparency, reproducibility, and reliability in predictive modeling. These measures, previously highlighted in the QUANTEC papers, remain highly relevant in addressing challenges related to treatment planning and outcome prediction [10,30].
Additionally, considering the parameters involved in treatment planning—such as dose calculation algorithms, optimization techniques, planners, and contouring methods—is crucial. The success of the optimization process depends on the cost function used by the algorithm, the structures defined by the user, and the algorithm employed for minimization. The situation can be further exacerbated if different treatment planning parameters are used for the same patient, resulting in notable differences in treatment output even with the same prescribed dose. Therefore, ensuring consistency and standardization in these planning parameters is vital to achieving more reliable and comparable treatment outcome predictions.
By acknowledging the potential impact of dose differences, adhering to reporting guidelines, and considering the influence of treatment planning parameters, we can enhance treatment response assessments and improve the reliability of predictive models in radiation oncology. These steps will contribute to advancing the field and improving the integration of predictive modeling into clinical practice.
Future AI-driven studies, such as domain adaptation and multi-institutional data harmonization, may help address the impact of treatment planning variations on model performance. Investigating these techniques could provide further insights into improving generalizability and clinical applicability in outcome prediction models.
Author Contributions
Conceptualization, R.R. and M.S.; Methodology, R.R., S.P. and L.C.F.; Validation, S.P.; Formal analysis, R.R., S.P., L.C.F. and D.L.; Investigation, R.R.; Resources, M.S.; Data curation, D.L.; Writing—original draft, R.R., S.P. and D.L.; Writing—review and editing, R.R., L.C.F. and M.S.; Supervision, M.S. All authors have read and agreed to the published version of the manuscript.
Funding
This research received no external funding.
Institutional Review Board Statement
The study was conducted in accordance with the Declaration of Helsinki, and approved by the Institutional Review Board of MD Anderson Cancer Center (protocol code RCR03-0800 and 17 January 2018).
Informed Consent Statement
Due to the retrospective nature of the research, informed consent was not required.
Data Availability Statement
The original contributions presented in this study are included in the article. Further inquiries can be directed to the corresponding author.
Acknowledgments
The author(s) acknowledge the support of the High-Performance Computing Facility at the University of Texas MD Anderson Cancer Center for providing computational resources that contributed to the research results reported in this paper. The author(s) also thank Dawn Chalaire from the English Editing Services at the University of Texas MD Anderson Cancer Center for her assistance.
Conflicts of Interest
The authors declare no conflicts of interest.
References
- Aerts, H.J.W.L.; Velazquez, E.R.; Leijenaar, R.T.H.; Parmar, C.; Grossmann, P.; Carvalho, S.; Bussink, J.; Monshouwer, R.; Haibe-Kains, B.; Rietveld, D.; et al. Decoding tumor phenotype by noninvasive imaging using a quantitative radiomics approach. Nat. Commun. 2014, 5, 4006. [Google Scholar] [CrossRef] [PubMed]
- Parmar, C.; Grossmann, P.; Rietveld, D.; Rietbergen, M.M.; Lambin, P.; Aerts, H.J.W.L. Radiomic machine-learning classifiers for prognostic biomarkers of head and neck cancer. Front. Oncol. 2015, 5, 272. [Google Scholar] [CrossRef] [PubMed]
- Gillies, R.J.; Kinahan, P.E.; Hricak, H. Radiomics: Images are more than pictures, they are data. Radiology 2016, 278, 563–577. [Google Scholar] [PubMed]
- McDermott, M.B.A.; Wang, S.; Marinsek, N.; Ranganath, R.; Foschini, L.; Ghassemi, M. Reproducibility in machine learning for health research: Still a ways to go. Sci. Transl. Med. 2021, 13, eabb1655. [Google Scholar] [CrossRef]
- Yang, J.; Soltan, A.A.S.; Clifton, D.A. Machine Learning Generalizability Across Healthcare Settings: Insights from multi-site COVID-19 screening. NPJ Digit. Med. 2022, 5, 69. [Google Scholar] [CrossRef]
- Nestor, B.; McDermott, M.B.A.; Boag, W.; Berner, G.; Naumann, T.; Hughes, M.C.; Goldenberg, A.; Ghassemi, M. Feature Robustness in Non-stationary Health Records: Caveats to Deployable Model Performance in Common Clinical Machine Learning Tasks. In Proceedings of the 4th Machine Learning for Healthcare Conference, PMLR, Ann Arbor, MI, USA, 9–10 August 2019; Doshi-Velez, F., Fackler, J., Jung, K., Eds.; Machine Learning for Healthcare: Ann Arbor, MI, USA, 2019; Volume 106. [Google Scholar]
- Caruana, R.; Lou, Y.; Gehrke, J.; Koch, P.; Sturm, M.; Elhadad, N. Intelligible Models for HealthCare. In Proceedings of the 21th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, Sydney, Australia, 10–13 August 2015. [Google Scholar] [CrossRef]
- Reiazi, R.; Abbas, E.; Famiyeh, P.; Rezaie, A.; Kwan, J.Y.; Patel, T.; Bratman, S.V.; Tadic, T.; Liu, F.-F.; Haibe-Kains, B. The impact of the variation of imaging parameters on the robustness of Computed Tomography Radiomic features: A review. Comput. Biol. Med. 2021, 133, 104400. [Google Scholar]
- Reiazi, R.; Arrowsmith, C.; Welch, M.; Abbas-Aghababazadeh, F.; Eeles, C.; Tadic, T.; Hope, A.J.; Bratman, S.V.; Haibe-Kains, B. Prediction of Human Papillomavirus (HPV) Association of Oropharyngeal Cancer (OPC) Using Radiomics: The Impact of the Variation of CT Scanner. Cancers 2021, 13, 2269. [Google Scholar] [CrossRef]
- Appelt, A.; Elhaminia, B.; Gooya, A.; Gilbert, A.; Nix, M. Deep learning for radiotherapy outcome prediction using dose data—A review. Clin. Oncol. 2021, 34, E87–E96. [Google Scholar]
- Ma, C.M.C.; Chetty, I.J.; Deng, J.; Faddegon, B.; Jiang, S.B.; Li, J.; Seuntjens, J.; Siebers, J.V.; Traneus, E. Beam modeling and beam model commissioning for Monte Carlo dose calculation-based radiation therapy treatment planning: Report of AAPM Task Group 157. Med. Phys. 2019, 47, E1–E18. [Google Scholar] [CrossRef]
- Chen, W.-Z.; Xiao, Y.; Li, J. Impact of dose calculation algorithm on radiation therapy. World J. Radiol. 2014, 6, 874–880. [Google Scholar] [CrossRef] [PubMed] [PubMed Central]
- Mammadov, O.; Akkurt, B.H.; Musigmann, M.; Ari, A.P.; Blömer, D.A.; Kasap, D.N.; Henssen, D.J.; Nacul, N.G.; Sartoretti, E.; Sartoretti, T.; et al. Radiomics for Pseudoprogression Prediction in High-Grade Gliomas: Added Value of MR Contrast Agent. Heliyon 2022, 8, e10023. [Google Scholar] [CrossRef] [PubMed]
- Alizadeh, M.; Lomer, N.B.; Azami, M.; Khalafi, M.; Shobeiri, P.; Bafrani, M.A.; Sotoudeh, H. Radiomics: The New Promise for Differentiating Progression, Recurrence, Pseudoprogression, and Radionecrosis in Glioma and Glioblastoma Multiforme. Cancers 2023, 15, 4429. [Google Scholar] [CrossRef] [PubMed]
- Choi, Y.J.; Kim, H.S.; Jahng, G.H.; Kim, S.J. Pseudoprogression in patients with glioblastoma: Assessment with contrast-enhanced dynamic and diffusion-weighted MR imaging. Radiology 2013, 259, 831–840. [Google Scholar]
- Giraud, P.; Giraud, P.; Gasnier, A.; El Ayachy, R.; Kreps, S.; Foy, J.-P.; Durdux, C.; Huguet, F.; Burgun, A.; Bibault, J.-E. Radiomics and Machine Learning for Radiotherapy in Head and Neck Cancers. Front. Oncol. 2019, 9, 174. [Google Scholar] [CrossRef] [PubMed] [PubMed Central]
- Lee, T.-F.; Chang, C.-H.; Chi, C.-H.; Liu, Y.-H.; Shao, J.-C.; Hsieh, Y.-W.; Yang, P.-Y.; Tseng, C.-D.; Chiu, C.-L.; Hu, Y.-C.; et al. Utilizing radiomics and dosiomics with AI for precision prediction of radiation dermatitis in breast cancer patients. BMC Cancer 2024, 24, 965. [Google Scholar] [CrossRef]
- Huang, Q.; Yang, C.; Pang, J.; Zeng, B.; Yang, P.; Zhou, R.; Wu, H.; Shen, L.; Zhang, R.; Lou, F.; et al. CT-Based Dosiomics and Radiomics Model Predicts Radiation-Induced Lymphopenia in Nasopharyngeal Carcinoma Patients. Front. Oncol. 2023, 13, 1168995. [Google Scholar] [CrossRef]
- Chopra, N.; Dou, T.; Sharp, G.; Sajo, E.; Mak, R. A Combined Radiomics-Dosiomics Machine Learning Approach Improves Prediction of Radiation Pneumonitis Compared to DVH Data in Lung Cancer Patients. Int. J. Radiat. Oncol. 2020, 108, e777. [Google Scholar] [CrossRef]
- Wang, J.; Shen, L.; Zhong, H.; Zhou, Z.; Hu, P.; Gan, J.; Luo, R.; Hu, W.; Zhang, Z. Radiomics features on radiotherapy treatment planning CT can predict patient survival in locally advanced rectal cancer patients. Sci. Rep. 2019, 9, 15346. [Google Scholar] [CrossRef]
- Zhang, Q.; Wang, K.; Zhou, Z.; Qin, G.; Wang, L.; Li, P.; Sher, D.; Jiang, S.; Wang, J. Predicting Local Persistence/Recurrence after Radiation Therapy for Head and Neck Cancer from PET/CT Using a Multi-Objective, Multi-Classifier Radiomics Model. Front. Oncol. 2022, 12, 955712. [Google Scholar] [CrossRef]
- Kumar, V.; Gu, Y.; Basu, S.; Berglund, A.; Eschrich, S.A.; Schabath, M.B.; Forster, K.; Aerts, H.J.W.L.; Dekker, A.; Fenstermacher, D.; et al. Radiomics: The process and the challenges. Magn. Reson. Imaging 2012, 30, 1234–1248. [Google Scholar] [CrossRef]
- Gargiulo, G. Next-Generation in vivo Modeling of Human Cancers. Front. Oncol. 2018, 8, 429. [Google Scholar] [CrossRef]
- Lambin, P.; Rios-Velazquez, E.; Leijenaar, R.; Carvalho, S.; van Stiphout, R.G.P.M.; Granton, P.; Zegers, C.M.L.; Gillies, R.; Boellard, R.; Dekker, A.; et al. Radiomics: Extracting More Information from Medical Images Using Advanced Feature Analysis. Eur. J. Cancer 2012, 48, 441–446. [Google Scholar] [CrossRef] [PubMed]
- Rizzo, S.; Botta, F.; Raimondi, S.; Origgi, D.; Fanciullo, C.; Morganti, A.G.; Bellomi, M. Radiomics: The Facts and the Challenges of Image Analysis. Eur. Radiol. Exp. 2018, 2, 36. [Google Scholar] [CrossRef] [PubMed]
- Zwanenburg, A.; Vallières, M.; Abdalah, M.A.; Aerts, H.J.W.L.; Andrearczyk, V.; Apte, A.; Ashrafinia, S.; Bakas, S.; Beukinga, R.J.; Boellaard, R.; et al. The Image Biomarker Standardization Initiative: Standardized Quantitative Radiomics for High-Throughput Image-Based Phenotyping. Radiology 2020, 295, 328–338. [Google Scholar] [CrossRef]
- Collins, G.S.; Reitsma, J.B.; Altman, D.G.; Moons, K.G. Transparent reporting of a multivariable prediction model for individual prognosis or diagnosis (TRIPOD): The TRIPOD statement. BMC Med. 2015, 13, 1. [Google Scholar] [CrossRef]
- Moons, K.G.M.; de Groot, J.A.H.; Bouwmeester, W.; Vergouwe, Y.; Mallett, S.; Altman, D.G.; Reitsma, J.B.; Collins, G.S. Critical appraisal and data extraction for systematic reviews of prediction modelling studies: The CHARMS checklist. PLoS Med. 2014, 11, e1001744. [Google Scholar]
- Debray, T.P.A.; Collins, G.S.; Riley, R.D.; I E Snell, K.; Van Calster, B.; Reitsma, J.B.; Moons, K.G.M. Transparent reporting of multivariable prediction models developed or validated using clustered data: TRIPOD-Cluster checklist. BMJ 2023, 380, e071018. [Google Scholar] [CrossRef]
- Bentzen, S.M.; Constine, L.S.; Deasy, J.O.; Eisbruch, A.; Jackson, A.; Marks, L.B.; Haken, R.K.T.; Yorke, E.D. Quantitative Analyses of Normal Tissue Effects in the Clinic (QUANTEC): An Introduction to the Scientific Issues. Int. J. Radiat. Oncol. Biol. Phys. 2010, 76, S3–S9. [Google Scholar] [CrossRef]