4. Discussion
The presented multi-scale hypermodeling framework combines subcellular processes related to cell proliferation and the cellular response to therapeutic agents with macroscopic processes such as the biomechanical interaction between healthy tissue and the tumor. Two cancer types, Wilms tumor and non-small cell lung cancer, were addressed, considering chemotherapy and radiation therapy as treatment modalities. The application of this framework aimed to address different clinical questions related to tumor shrinkage after neoadjuvant therapy and to tumor recurrence.
The selection of cancer types and clinical scenarios is based on their capacity to map and tackle issues related to the response to preoperative therapy or to recurrence following non-surgical treatment. Roughly 7% of malignant pediatric tumors are renal tumors, with nephroblastoma or Wilms tumor (WT) accounting for approximately 90% of these cases [85]. WT is ideal for the construction and validation of the spatiotemporal hypermodel due to the consistent administration of chemotherapy before surgery in all pediatric patients, coupled with regular monitoring utilizing 3D imaging modalities both pre- and post-chemotherapy. However, around 10–20% of patients do not respond to pre-operative chemotherapy [86,87]. For these patients, primary surgery would be beneficial. Therefore, a primary clinical question that the hypermodel could answer is the following: will a given nephroblastoma respond to pre-operative chemotherapy by tumor shrinkage, yes or no? Moreover, accurate prediction of tumor localization after chemotherapy is relevant for surgical planning, particularly in procedures such as nephron-sparing surgery. Knowledge of vascular pathways and of potential adherence to other organs, such as the liver, spleen, pancreas, and colon, is vital for optimizing patient outcomes and minimizing risks. Finally, multiparametric analyses [14,19], as well as the proof-of-concept studies presented in the present work (Section 3.2), reveal that tumor shrinkage after chemotherapy is not only influenced by the sensitivity of tumor cells to the drugs administered but also depends largely on the proliferation profile of the tumor. The histology and proliferation index of a WT at the time of diagnosis are unknown because no biopsy takes place. An indirect way of determining them would therefore be of paramount importance for the clinician to judge whether or not a particular patient would benefit from chemotherapy. The miRNA pattern in serum and blood at the time of diagnosis may be used as a surrogate indicator of the actual cell-type composition of the tumor and its proliferation characteristics. Machine learning can be employed to link the in vivo proliferation estimates, as attempted in the present work, with the serum miRNA profiling of a patient. The proliferation estimates in the present work were derived from the chemotherapy-induced tumor shrinkage measured from medical images, taking into account the sensitivity profile of the patient to therapy according to the output of the molecular model.
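As a purely illustrative sketch (not the pipeline actually used in this work), the envisaged machine-learning link between serum miRNA profiles and image-derived proliferation estimates can be reduced to a regularized linear regression; all variable names and the synthetic data below are hypothetical:

```python
import numpy as np

def fit_ridge(X, y, lam=1.0):
    """Ridge regression via the normal equations: w = (X^T X + lam I)^-1 X^T y.

    X: (n_patients, n_mirnas) matrix of serum miRNA expression levels.
    y: (n_patients,) in vivo proliferation estimates derived from
       image-based tumor shrinkage, as described in the text.
    """
    n_features = X.shape[1]
    A = X.T @ X + lam * np.eye(n_features)
    return np.linalg.solve(A, X.T @ y)

def predict(X, w):
    """Predict proliferation estimates for new miRNA profiles."""
    return X @ w

# Synthetic illustration only: 20 "patients", 5 miRNA features.
rng = np.random.default_rng(0)
X = rng.normal(size=(20, 5))
true_w = np.array([0.8, -0.3, 0.0, 0.5, 0.1])
y = X @ true_w + 0.01 * rng.normal(size=20)

w_hat = fit_ridge(X, y, lam=0.1)
```

In practice, a non-linear learner and a much larger, clinically curated cohort would be needed; the sketch only fixes the shape of the mapping (miRNA profile in, proliferation estimate out).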
Lung cancer ranks as the second most prevalent form of cancer globally and the primary contributor to cancer-related deaths [88]. The overall 5-year survival rate is low, at approximately 20% [89]. The poor survival is attributed to resistance to treatment and to local or distant relapses. In the case of radiotherapy, the major treatment modality for NSCLC, hypoxia due to disorganized vasculature, cancer stem cells, and mutational status (e.g., EGFR, KRAS, etc.) are believed to be among the key factors in resistance [90]. The management of local recurrences also remains challenging [91,92]. It may involve radiotherapy re-treatment using conventional or advanced techniques (i.e., intensity-modulated radiation therapy, stereotactic body radiation therapy, proton beam therapy), chemotherapy, targeted therapy, or surgery, depending on the stage and previous treatment. Re-irradiation poses risks of severe toxicity for previously irradiated critical organs, and consensus on the optimal re-irradiation dose is lacking [91,93]. Moreover, the efficacy of combining re-irradiation with systemic treatments, such as chemotherapy, and the ideal delivery sequence (concurrent or sequential) remain uncertain [93]. The hypermodel serves as a powerful tool for the analysis of the combined effect of signaling networks, particularly those implicated in treatment resistance, alongside other resistance mechanisms. Furthermore, it has the capability to address clinical questions related to the management of inoperable primary tumors or recurrences. The hypermodel plays a crucial role in evaluating tumor control probability by assessing the surviving clonogens under different treatment approaches, aiding in the selection of the most suitable strategy to prolong survival. Early detection of recurrences is vital for a better clinical prognosis [94], especially in lung cases where radiological assessments alone may not adequately discern small lesions, such as those up to 3 mm in size. These limitations of imaging modalities in detecting such lesions emphasize the need to integrate approaches for accurate lesion characterization and timely differentiation between tumors and other conditions, such as infections, scars, or post-treatment changes. In instances where local recurrence is anticipated, the hypermodel can assess when the tumor is expected to become clinically detectable, facilitating more effective patient follow-up.
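The evaluation of tumor control probability from surviving clonogens mentioned above is conventionally formalized via the Poisson TCP model on top of linear-quadratic (LQ) cell survival. The following minimal sketch uses illustrative radiobiological parameters (alpha, beta, clonogen number) that are not fitted to NSCLC data and is not the Oncosimulator's actual implementation:

```python
import math

def lq_survival(dose_per_fraction, n_fractions, alpha=0.35, beta=0.035):
    """Surviving fraction after fractionated irradiation under the LQ model:
    S = exp(-n (alpha d + beta d^2)).
    alpha (1/Gy) and beta (1/Gy^2) are illustrative placeholder values."""
    d = dose_per_fraction
    return math.exp(-n_fractions * (alpha * d + beta * d * d))

def tcp(n_clonogens, surviving_fraction):
    """Poisson tumor control probability: TCP = exp(-N * S),
    the probability that zero clonogens survive."""
    return math.exp(-n_clonogens * surviving_fraction)

# Compare two hypothetical schedules delivering the same nominal 60 Gy.
s_conv = lq_survival(2.0, 30)   # conventional: 30 x 2 Gy
s_hypo = lq_survival(7.5, 8)    # hypofractionated: 8 x 7.5 Gy
tcp_conv = tcp(1e7, s_conv)
tcp_hypo = tcp(1e7, s_hypo)
```

Under this model, the schedule with the lower surviving fraction yields the higher control probability, which is the kind of ranking the hypermodel performs when selecting among treatment strategies.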
The results presented are specific to the cancer types studied. However, the methodology itself is general and, more importantly, the models can be adapted and trained for additional cancer types and clinical questions, enhancing the reusability and extensibility of the overall framework.
Finally, traditional therapeutic advancements in clinical settings predominantly rely on randomized clinical trials, which aim to identify favorable treatment outcomes on average. However, patient responses to therapies often vary significantly from this average behavior. Integrated approaches like the ones presented here can be of great clinical value in determining drug effectiveness, dosage, and duration, as well as investigating the development of resistance to drugs and the effect of intra- and inter-tumoral heterogeneity. Multiscale cancer modeling holds promise in elucidating why certain treatments fail while others effectively control tumor progression, as well as why a specific therapy is effective only in a subset of patients. Eventually, by training the model using individual patient data, a more precise depiction of disease progression kinetics can be attained.
At the molecular level, the p53-mediated signaling pathways are particularly important in determining the tumor cell response to DNA-damaging chemotherapeutic drugs, such as doxorubicin and vincristine, as well as to radiation therapy [95]. In the present work, we presented an integrated molecular model capturing key cell signaling pathways operating at different time scales, a well-recognized challenge in the field. We model the p53-mediated DNA damage-response pathway and refine its predictions by running a model of the ErbB receptor-mediated Ras-MAPK and PI3K/AKT pathways. Information is passed across the identified interfaces in both directions. To consider the effect of patient-specific molecular profiling, we have also incorporated miRNA expression and various mutation data to renormalize the initial expression levels of the corresponding mRNAs to a particular patient. In doing so, we have also taken into account the heterogeneity of the microenvironment and have adopted an ensemble-of-models approach by averaging over multiple conditions of receptor expression, growth factor availability, and the nature of the memory coupling the signaling and transcriptional modules. The aim is to provide a mechanistic foundation for the more empirical models in order to obtain patient-specific cell kill rates under particular dosage conditions.
The obtained cell kill rate was directly incorporated as an input to the Oncosimulator. The Oncosimulator serves as an integrator, effectively bridging scales and facilitating the “exposure” of molecular mechanisms to the scale where the outcome is formulated. The Oncosimulator is built on the cancer stem cell hypothesis and accounts for tumor repopulation during and after treatment, assuming different tumor proliferation dynamics and varying degrees of adaptation to nutrient-deprived conditions. As the tumor grows, well-vascularized regions providing sufficient nutrients to cancer cells can coexist with nutrient-limited regions within the tumor mass. In this work, glucose is assumed to be the only limiting resource, although oxygen and glutamine can also be incorporated. The metabolic component models the dependence of glucose uptake on glucose concentration, using Michaelis–Menten kinetics at the genome scale. The model encapsulates the metabolic adaptations exhibited by highly proliferating human cancer cells and provides information to the Oncosimulator regarding the cell proliferation rate given the available glucose. The model is developed by utilizing Recon1, the first human Genome-Scale Metabolic Model (GSMM), and by constraining certain metabolic fluxes in a simplified manner. Simulations suggest that the model adequately mirrors the glycolytic phenotype, showing increased growth rates, elevated lactate production, and a decline in growth yield with escalating glucose concentrations. The predicted cellular growth rates align with the characteristics observed in the studied cancer types. Moving forward, we aim to further refine the model by advancing algorithms for generating GSMMs specific to cancer cell lines and tumors.
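The Michaelis–Menten dependence of glucose uptake on glucose concentration described above can be sketched as follows; the kinetic constants and the linear growth-rate proxy are illustrative placeholders rather than Recon1-derived values:

```python
def glucose_uptake(glc_mM, v_max=10.0, k_m=1.5):
    """Michaelis-Menten glucose uptake rate: v = v_max * [glc] / (k_m + [glc]).
    v_max (mmol/gDW/h) and k_m (mM) are illustrative placeholder constants."""
    return v_max * glc_mM / (k_m + glc_mM)

def growth_rate(uptake, yield_per_mmol=0.05):
    """Toy linear proxy for the growth rate returned to the Oncosimulator;
    the actual metabolic component solves a flux-balance problem over Recon1."""
    return yield_per_mmol * uptake

# Uptake saturates toward v_max as glucose concentration rises.
rates = {glc: growth_rate(glucose_uptake(glc)) for glc in (0.5, 2.0, 5.0, 25.0)}
```

The saturating form reproduces the qualitative behavior the text describes: uptake, and hence the proliferation rate fed to the Oncosimulator, rises steeply at low glucose and plateaus at high glucose.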
Additionally, we plan to develop patient-specific metabolic models by obtaining transcriptomic profiles of biospecimens at tissue and cellular resolution and by performing in vitro experiments utilizing patient-derived cancer cells. For such in vitro experiments, it is necessary to define a more physiologically relevant environment, including nutrients such as BCAAs, fatty acids, and glucose, to better mimic human blood and the tumor metabolic microenvironment. Through these efforts, we aim to enhance the predictive capacity of our models, rendering them more reflective of the intricate patient-specific metabolic landscape of cancer.
The local concentration of glucose is described by the vasculature model. A simple vasculature model was constructed as the first to be used in the development, verification, and validation of the WT and NSCLC multi-modeler hypermodels. More detailed models, as described subsequently, can be readily incorporated into the framework if justified by the available clinical data. First of all, several ‘nutrient’ fields can be considered, such as oxygen and glutamine. In reality, when the model is used to describe glucose transport, oxygen availability should also be accounted for, as per [50]. Another aspect is that the metabolic hypomodel uses an independent model of glucose consumption. In principle, the rate of glucose consumption from the metabolic model could be passed back to the vasculature component and used to update the glucose concentrations. Moreover, the vasculature is assumed to be ‘static’ in the current model, in that it does not evolve in time. If justified by the available data, the temporal evolution of the vasculature can easily be included. In addition to the tissue-scale transport model used in the clinical demonstrators, a range of more spatially resolved models of transport in tumor micro-vessels have been developed using Chaste [26]. These models can be used to inform the hypomodel used in the present work.
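As an illustration of the transport modeling discussed above, a one-dimensional steady-state diffusion-consumption balance between a vessel and avascular tissue can be solved with a simple finite-difference scheme; all parameter values below are placeholders, not the hypermodel's calibrated ones:

```python
import numpy as np

# Illustrative 1-D steady-state glucose field between a vessel (x = 0) and
# avascular tissue (x = L), with linearized consumption: D c'' = k c.
D = 1e-3      # diffusivity, mm^2/s (placeholder)
k = 1e-2      # linearized consumption rate, 1/s (placeholder)
L = 0.5       # domain length, mm
n = 101
x = np.linspace(0.0, L, n)
h = x[1] - x[0]

# Assemble the linear system: Dirichlet value at the vessel, zero flux at x = L.
A = np.zeros((n, n))
b = np.zeros(n)
A[0, 0] = 1.0
b[0] = 5.0                           # vessel glucose concentration, mM
A[-1, -1] = 1.0
A[-1, -2] = -1.0                     # c[n-1] - c[n-2] = 0 (no-flux boundary)
for i in range(1, n - 1):
    A[i, i - 1] = D / h**2
    A[i, i] = -2.0 * D / h**2 - k
    A[i, i + 1] = D / h**2
c = np.linalg.solve(A, b)            # glucose profile, decaying away from vessel
```

A consumption rate supplied by the metabolic hypomodel would enter this balance through the sink term, which is exactly the feedback loop the text identifies as a possible extension.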
The hypermodel is found to reproduce realistic tumor shapes in growth scenarios (Lung), whereas shrinkage scenarios (Nephroblastoma (WT)) tend to result in tumor shapes with a more ‘diffuse’ appearance than those observed. The hypermodel achieved a good prediction of tumor position in the simulated cases. The following limitations may explain the observed discrepancies and could be the subject of future research. A critical issue for the Biomechanics Simulator is the uncertainty in the mechanical tissue parameters (which are not patient-specific) and the lack of well-defined boundary conditions for the mechanical computations, which are particularly difficult to establish for the WT and Lung scenarios. Furthermore, the evaluation relies on image registration techniques to compare simulation results to imaging data at a later time point. This process introduces an uncertainty in the relative positioning of the tumors. To reduce the importance of this uncertainty in future studies, the use of fixed anatomical markers as references within the respective imaging frame could be investigated. Another limitation is related to the complexity of the coupling between the OS and the BMS. Mapping the pressure (direction-of-least-pressure) field computed by the BMS onto the discrete model of the OS is challenging, as is updating the BMS with the OS cell concentration values. Accuracy in both steps is affected by interpolation. Mesh creation from image segmentation and the mapping of 3D parameter distributions between domains are commonly used in Finite Element- or Finite Difference-based simulations. These “convenience functions” are crucial for functioning simulator components. We believe that each of these functionalities could be well encapsulated in a standalone hypomodel in the future. This would not only greatly facilitate the creation of new personalized FEM models and the parameter exchange between other component simulators; it would also ensure consistent handling of these critical simulation and communication aspects across the platform. Finally, morphological changes in the healthy tissue also influence tumor evolution. This aspect is not taken into account by the present hypermodels.
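To illustrate the interpolation step involved in mapping fields between the BMS and the OS, the following sketch performs trilinear interpolation of a nodal field on a regular unit-spaced grid; the actual platform components operate on unstructured FEM meshes with shape-function interpolation, so this is a simplified stand-in:

```python
import numpy as np

def trilinear(field, p):
    """Trilinear interpolation of a nodal 3D field (regular, unit-spaced grid)
    at a continuous point p = (x, y, z) inside the grid."""
    i, j, k = (int(np.floor(v)) for v in p)
    fx, fy, fz = (v - np.floor(v) for v in p)
    value = 0.0
    for di in (0, 1):
        for dj in (0, 1):
            for dk in (0, 1):
                # Weight of each of the 8 surrounding grid nodes.
                w = (1 - fx, fx)[di] * (1 - fy, fy)[dj] * (1 - fz, fz)[dk]
                value += w * field[i + di, j + dj, k + dk]
    return value

# A linear "pressure" field is reproduced exactly by trilinear interpolation,
# which is one way to sanity-check such a mapping routine.
x, y, z = np.meshgrid(np.arange(4), np.arange(4), np.arange(4), indexing="ij")
pressure = 2.0 * x + 3.0 * y - z
```

Exactness on linear fields is a useful unit test for the "convenience functions" mentioned above; errors only appear for fields with curvature, which is where the interpolation-induced inaccuracy discussed in the text originates.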
It should be pointed out that the proposed models originate from the rather macroscopic approach to tumor response to treatment adopted by classical radiobiology. Several steps have been taken toward progressively deeper microscopic mechanisms. However, owing to the great complexity of cancer mechanisms, not all possible factors have been considered so far, such as the microenvironment or the immune system. Nevertheless, these can be addressed in much the same way as has been done through the use of hypomodels thus far: additional hypomodels, each representing a factor or phenomenon not yet addressed, can be developed and linked to the core of the Oncosimulator.
Before using a hypermodel in clinical settings, it needs to be clinically validated to ensure that it is accurate and reliable. Validation should occur at the levels of both the hypomodels and the hypermodel. Validation usually follows a two-step approach, in which the model is first calibrated to a specific patient using information from an early observation point, and second, the model’s predictions for a later observation point are compared to the actual disease evolution. For example, in the case of the BMS, the first step involves the creation of a patient-specific simulation domain, while in the case of the Oncosimulator, a cohort of virtual tumors is created specific to the proliferation characteristics of the patient’s tumor (e.g., Ki-67, etc.). The next step involves a forward simulation of the calibrated model to a later time point and a comparison of the simulation results to the patient’s disease evolution, e.g., in terms of tumor location and shape, as presented in this manuscript, or of tumor volume reduction. Approaches for recovering important parameters of biomechanically coupled tumor growth models from single observation points have been investigated in [96]. Furthermore, in vitro data and experiments can serve as valuable tools for the calibration and preliminary validation of a hypomodel. For example, by conducting in vitro experiments based on patient-derived cancer cells and testing against the experimental data, cancer- and patient-specific metabolic models can be built, as previously discussed.
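The two-step calibrate-then-predict scheme can be caricatured with a deliberately simple exponential-kinetics stand-in for the calibrated model; the volumes and time points below are hypothetical:

```python
import math

def calibrate_growth_rate(v0, v1, dt):
    """Step 1 (calibration): recover an effective net proliferation rate from
    two early tumor-volume observations, assuming exponential kinetics
    V(t) = V0 * exp(r t). A simplified stand-in for the cohort of virtual
    tumors calibrated by the Oncosimulator."""
    return math.log(v1 / v0) / dt

def predict_volume(v0, r, t):
    """Step 2 (validation): forward-simulate to a later observation time."""
    return v0 * math.exp(r * t)

# Hypothetical volumes (mL): baseline, week 2, and a held-out week-6 value.
v_base, v_week2, v_week6_observed = 120.0, 90.0, 38.0
r = calibrate_growth_rate(v_base, v_week2, dt=2.0)   # per week; negative here
v_week6_pred = predict_volume(v_base, r, t=6.0)
rel_err = abs(v_week6_pred - v_week6_observed) / v_week6_observed
```

The held-out relative error plays the role of the comparison to the patient's actual disease evolution; in the real setting, the comparison is made on richer outputs such as tumor location and shape.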
The clinical validation of the hypermodel should initially be conducted using retrospective data from datasets distinct from those utilized for clinical adaptation and calibration. These datasets may originate from the same or from other clinical studies, the latter ensuring the robustness and generalizability of the model’s performance. Following sufficient preliminary clinical validation, the next step would be to conduct a prospective blinded clinical trial. This trial would aim to investigate whether utilizing the hypermodel’s predictions correlates with improved treatment outcomes compared to standard approaches that do not incorporate the hypermodel. For example, a hypermodel may indicate whether a preoperative chemotherapy scheme is preferable to primary surgery. One should look at the tumor volume reduction predicted by the hypermodel: if the hypermodel predicts a reduction, chemotherapy is selected; if it does not, the patient proceeds to primary surgery. In the standard arm, all patients would receive preoperative chemotherapy. Comparing the results between the two arms would reveal whether the model is beneficial for the outcome of a patient. When utilizing retrospective data, or for prospective trials consisting of a single standard arm, one could compare the observed reduction in tumor volume with the reduction predicted by the hypermodel as a means of validating it. Currently, there is an ongoing validation effort concerning nephroblastoma within the context of SIOP (International Society of Paediatric Oncology) clinical protocols. A new infrastructure has been established within the University Hospital of Saarland to facilitate the collection, storage, retrieval, curation, and utilization of multiscale data generated during nephroblastoma treatment. Additionally, this infrastructure is capable of executing the multiscale mechanistic simulation models comprising the Nephroblastoma Oncosimulator directly within the hospital environment.
The potential benefits of implementing a hypermodel in a clinical setting include improved accuracy and efficiency in treatment selection and prognosis prediction, leading to better patient outcomes and enhanced quality of care. Hypermodels can enable personalized medicine by tailoring treatments to individual patient characteristics, thereby maximizing therapeutic efficacy and minimizing side effects, e.g., by avoiding ineffective pre-surgery chemotherapeutic treatments for certain patients. Moreover, hypermodels can also help healthcare providers optimize resource allocation, reduce healthcare costs, and streamline workflows by automating repetitive tasks or providing decision support.
However, integrating a hypermodel into a clinical setting presents several challenges, related to the familiarization of clinicians with the new technologies necessary for the exploitation of hypermodels; to data privacy, security, and integrity; and to the increasing demands on computing power and memory.
Integrating the model into existing workflows without disrupting clinical operations or overwhelming healthcare providers with additional information or tasks is important [97]. Seamless adoption and effective utilization may involve developing interfaces or APIs for data exchange between the hypermodel and the clinical workflow software [97]. Resistance to change among healthcare professionals, skepticism about the utility of the model, and concerns about job displacement due to automation are common barriers to successful implementation. Ensuring the hypermodel’s reliability is of utmost importance for clinicians to accept it, as incorrect clinical decisions based on inaccurate predictions could potentially harm patients. Training healthcare professionals in how to use the hypermodel is crucial for successful integration. Providing user-friendly interfaces and clear guidelines for incorporating the hypermodel’s predictions into clinical decision-making processes can facilitate adoption among clinicians. The following example can serve as a model for a generalized guide for the integration of hypermodels into existing clinical workflows. In the case of nephroblastoma, there are two initial treatment options: either to administer neoadjuvant chemotherapy and then proceed to the surgical excision of the tumor, or to start with the surgical excision of the tumor and administer chemotherapy afterwards. A relevant hypermodel could predict whether or not the shrinkage of the tumor due to neoadjuvant chemotherapy would exceed a minimal clinically acceptable threshold (e.g., a 30% reduction in the sum of lesion diameters based on imaging studies [98]). If so, neoadjuvant chemotherapy is applied; this leads to a considerable shrinkage of the tumor and therefore a smaller surgical field in the subsequent operation. Otherwise, surgery takes place first, since neoadjuvant chemotherapy would not substantially shrink the tumor and would only create side effects.
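The decision rule of this example can be stated compactly; the function name is hypothetical, and the 30% default follows the threshold quoted above:

```python
def recommend_initial_treatment(predicted_reduction, threshold=0.30):
    """Decision rule sketched in the text: if the hypermodel predicts that
    neoadjuvant chemotherapy will shrink the sum of lesion diameters by at
    least the threshold (30% by default, per the imaging-based criterion
    cited above), chemotherapy precedes surgery; otherwise surgery comes
    first. 'predicted_reduction' is a fraction, e.g., 0.45 for 45%."""
    if predicted_reduction >= threshold:
        return "neoadjuvant chemotherapy, then surgery"
    return "primary surgery"
```

Encapsulating the rule this explicitly is also what a clinical-workflow API, as mentioned above, would expose: a single prediction in, a single guideline-conformant recommendation out.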
Personalized simulations and virtual digital twins may raise ethical implications related to data privacy, security, and integrity, as well as to patient consent [99]. A multi-faceted approach is required to protect against possible misuse of sensitive personal data or unauthorized or accidental modification/deletion of data, to ensure confidence in model predictions, and to maintain patient trust. Patient data should be subject to the pertinent legislation, including the General Data Protection Regulation (GDPR) in the European Union, the Health Insurance Portability and Accountability Act (HIPAA) in the United States, and applicable national laws, as well as to the pertinent ethical guidelines as specified and approved by the ethical committee of the clinical center. A secure IT infrastructure should be implemented, including firewalls, data encryption, and authentication and authorization mechanisms, to guarantee the secure storage of data and models. Furthermore, data should be either anonymized or pseudonymized while in transit. Data transmission can be secured using, for example, HTTPS and the DICOMweb protocols. Finally, patients should be informed about the possibility that their data will be used for modeling and simulation purposes, and they must fully comprehend how their data will be utilized. Any such use of their data should be possible only if the patients have provided their written informed consent, unless otherwise specified by the ethical committee of the clinical center.
Finally, creating and maintaining virtual digital twins in healthcare necessitates considerable computational resources, storage capabilities, and data-processing power [100]. A scalability issue emerges due to the vast amount of patient data involved in, or produced by, the model executions. Cloud-based infrastructures can offer dynamic adjustment of resources based on demand. Distributed computing architectures and data compression techniques can help optimize resource utilization. Standardized data formats can facilitate interoperability and scalability across different healthcare settings. Parallelization of model executions, e.g., the concurrent execution of the virtual tumors across multiple processing units, combined with high-performance computing resources, can reduce the overall execution time and enhance scalability, particularly in scenarios where large datasets or complex algorithms are involved.
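Since the virtual tumors in a cohort are mutually independent, their concurrent execution is straightforward to express; the sketch below uses a thread pool and a toy single-run function for brevity, whereas CPU-bound Oncosimulator runs would in practice use process pools or HPC job schedulers:

```python
from concurrent.futures import ThreadPoolExecutor
import math

def simulate_virtual_tumor(params):
    """Placeholder for one Oncosimulator run: here just exponential growth
    from n0 cells at the given rate over time t. The real run is a full
    multiscale simulation, but the independence of runs is the same."""
    n0, growth_rate, t = params
    return n0 * math.exp(growth_rate * t)

def run_cohort(cohort, max_workers=4):
    """Execute all virtual tumors of a cohort concurrently and collect the
    results in input order."""
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        return list(pool.map(simulate_virtual_tumor, cohort))

# Hypothetical cohort: same initial population, three growth-rate variants.
cohort = [(1e6, r, 10.0) for r in (0.01, 0.02, 0.03)]
results = run_cohort(cohort)
```

Because each run is independent, this "embarrassingly parallel" pattern scales nearly linearly with the number of processing units, which is the source of the execution-time reduction noted above.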