1. Introduction
The concern for viruses’ spread has been in the researchers’ spotlight for many years [1,2,3,4]. Particularly, in the last year and a half, due to the COVID-19 pandemic, this concern has become a hot topic in many research fields [5,6,7,8,9,10,11,12,13]. Many models exist to study the spread of viruses. A first categorization distinguishes deterministic models from stochastic models [14,15,16,17].
Deterministic models are the simplest, with fixed input variables. They are also known as compartmental models because the individuals in the population are assigned to different subgroups, or compartments, each of which represents a specific condition of the individual in the epidemic situation [18]. Time derivatives express the transition rates of individuals from one compartment to another; thus, the model is constructed as a system of Ordinary Differential Equations (ODEs).
Stochastic models take into account variations in the input variables and provide results in terms of probability. Unlike a deterministic model, a stochastic model allows random variations in one or more inputs over time, so the probability distributions of the outcomes can be estimated. Specifically, the time-varying variables can be the exposure risk, the recovery rate, and other disease dynamics. Because they can incorporate the variability of the input data, stochastic models have a more complex structure than deterministic ones but adhere more closely to reality [19]. A second categorization of the models is based on whether or not they include vital dynamics, i.e., the demographic dynamics of naturally occurring births and deaths [20].
In this paper, deterministic models with vital dynamics are studied. Precisely, the Susceptible-Infectious-Recovered (SIR), Susceptible-Exposed-Infectious-Recovered (SEIR), and Susceptible-Exposed-Infectious-Recovered-Susceptible (SEIRS) models are considered, with a vaccination factor for one of them [21,22].
This work aims to estimate various epidemiological model parameters by using the newly developed framework called Extreme Theory of Functional Connections (X-TFC) [23], which merges the Physics-Informed Neural Networks (PINNs) method, introduced by Raissi et al. [24], and the Theory of Functional Connections (TFC), proposed by Mortari [25]. This method can solve both forward problems and inverse problems (data-driven parameter discovery) involving DEs in different perturbation scenarios. A typical field where solving inverse problems is of interest is remote sensing [26,27,28,29]. For instance, in Reference [30], the authors combine radiative and heat transfer equations to create a set of parametric DEs. The solutions of this system of equations are compared with real data to retrieve the grain size and the thermal inertia of planetary regoliths, which are the parameters governing the physics of the problem. There are two main approaches to solving mathematical and physical inverse problems: deterministic and probabilistic. The deterministic approach tackles inverse problems using standard optimization techniques, which find the set of optimal parameters that minimizes the difference between simulated and real data. However, inverse problems are known to be ill-posed [31]; hence, it becomes challenging to determine the uncertainty in the retrieved quantities, mainly due to the noise in the observed data and the uncertainty in the real values of the input parameters that are not tuned.
As stated in [32], inverse problems for parameter estimation are in general ill-posed: the solution is non-unique because the unknowns outnumber the data/measurements, and its stability with respect to noise in the data and modeling errors is generally not guaranteed. Standard optimization techniques consider the tuned quantities to be deterministic; therefore, the inverse problems’ outputs are fixed quantities. However, these quantities are affected by uncertainties that need to be estimated. The issue is that uncertainty quantification (usually done via regularization techniques) is not trivial to perform, and it can lead to poor results, mainly when the problem is ill-posed. Moreover, nonlinear or non-convex inverse problems have multiple local-minimum solutions; thus, more than one acceptable solution can be computed, and it becomes challenging to select the best one via the classical optimization framework [33].
To overcome this issue, the probabilistic approach can be used, in particular Bayesian inversion techniques. In the Bayesian inversion framework, the quantities to be estimated are considered random variables; thus, the output of the inverse modeling is a probability distribution for each parameter. Therefore, with the probabilistic approach, the degree of uncertainty of the retrieved quantities is included in their probability distributions [26].
Nevertheless, in this work, we tackle the inverse problem of data-driven parameter discovery for epidemiological models via a deterministic approach. We show that solving these problems via Physics-Informed Neural Network (PINN) methods, such as X-TFC, mitigates the ill-posedness of the inverse problems with respect to modeling errors and noisy data. This is possible because the physics of the problem, modeled via a DE, acts as a regularizer during the search for the optimal parameters (i.e., the NN training). That is, the network training is carried out in a data-physics-driven fashion.
This manuscript is organized as follows. In Section 2, PINNs are introduced, with particular regard to the X-TFC framework. In Section 3, the application of X-TFC to the data-driven discovery of the parameters governing a few of the most common epidemiological compartmental models is presented. In Section 4, the results are presented and discussed. Finally, the concluding remarks are given in the last section.
2. Physics-Informed Neural Network and Functional Interpolation
PINNs are machine learning methods that include physics into a data-driven functional representation of input–output pairing collections. As defined by Raissi et al. [24], the term PINN describes NNs that embed the physics as a regularization term in the loss function. For instance, suppose that one aims to perform a regression of an experimental dataset employing an NN, and that the collected data represent some physical events modeled via a set of DEs. In conventional regression, one would approximate the data utilizing an NN trained to minimize a Mean Squared Error (MSE) loss function. Nevertheless, there is no guarantee that the physical phenomena governing the dataset would not be violated. To mitigate this issue, PINNs ensure that the physics, modeled via DEs, is added as a penalty to the loss function. This extra term serves as a regularizer that penalizes the training when the DE and its constraints (e.g., Boundary Conditions (BCs) and, eventually, Initial Conditions (ICs)) are violated. Thus, one can guarantee that the physics underlying the process is not violated. This method is defined as a data-physics-driven solution of DEs. More specifically, from the physics perspective, this approach enables the training of NNs to learn the solution of DEs in a data-physics-driven fashion. This becomes critical if the DEs do not precisely model the physics of the problem, for example, when uncertain dynamical systems are considered and/or when perturbations are non-negligible. Conversely, when the purpose is to retrieve parameters governing some physical phenomena modeled via DEs (e.g., the single scattering albedo in the radiative transfer equation), one usually refers to data-physics-driven parameter discovery of DEs (i.e., inverse problems) [24]. When data are not available, and consequently the loss function contains solely the residuals of the DEs and their constraints, PINNs learn the solutions of problems involving DEs in a purely physics-driven fashion.
The major shortcoming of the standard PINNs, as presented by Raissi et al. [24], is that the DE constraints are not analytically satisfied and, consequently, need to be learned concurrently with the DE solution within the domain. Hence, during the PINN training, we deal with competing objectives: learning the hidden DE solution within the domain and on the boundaries. This leads to unbalanced gradients during the network training via gradient-based techniques, which frequently causes PINNs to struggle to accurately learn the underlying DE solution [34]. Gradient-based optimization methods may get stuck in limit cycles or even diverge when several competing objectives are present [35,36]. To surmount this issue, the authors of [34] proposed a learning-rate annealing algorithm that uses gradient statistics to adaptively assign proper weights to the different terms (e.g., DE residuals within the domain and DE residuals on the boundaries) of the PINN loss function during the training. In this work, we propose to employ a different and more robust PINN model, the Extreme Theory of Functional Connections (X-TFC), which merges NNs and the Theory of Functional Connections (TFC) [23,37]. X-TFC exploits the constrained expressions (CEs) introduced within the TFC to satisfy the boundary constraints analytically.
TFC, elaborated by Mortari [25], is a mathematical framework for functional interpolation, where functions are approximated using these CEs. A CE is a functional that is the sum of a free function and a functional that analytically satisfies the constraints regardless of the choice of the free function [25,38]. TFC has multiple applications. Primarily, TFC is used for the solution of DEs, because the CEs eliminate the “curse of the equation constraints” [39,40,41]. Moreover, TFC has already been used to solve different classes of optimal control space guidance problems, such as energy-optimal landing on large and small planetary bodies [42,43], fuel-optimal landing on large planetary bodies [44], and energy-optimal relative motion problems subject to Clohessy-Wiltshire dynamics [45], as well as classes of transport theory problems, such as radiative transfer [46] and rarefied-gas dynamics [47]. For tackling DEs, the standard (or Vanilla, as defined in [48,49]) TFC method employs a linear combination of orthogonal polynomials, such as Legendre or Chebyshev polynomials [39,40], as the free function. However, using orthogonal polynomials as the free function makes the standard TFC framework suffer from the curse of dimensionality, particularly when solving large-scale PDEs. To overcome this limitation, X-TFC employs a shallow NN trained via the Extreme Learning Machine (ELM) algorithm [50] to represent the free function.
Being a PINN method, X-TFC can solve forward and inverse problems involving parametric DEs with high precision and low computational time. The method for solving direct problems involving parametric DEs is introduced and presented by Schiassi et al. [23]. As previously stated, the focus of this work is to apply X-TFC to the data-driven parameter discovery of compartmental epidemiological models such as SIR, SEIR, and SEIRS. In the remainder of this section, we will explain how X-TFC is applied to tackle these problems. Such models are systems of ODEs, where the constraints are on the initial values of the systems’ solutions; that is, these problems are initial value problems (IVPs). Therefore, in this section, we will also present the step-by-step derivation of the constrained expressions for these problems.
2.1. Generality on Neural Networks
Neural Networks (NNs) are one of the key components of the X-TFC framework that will be used to tackle the problems considered in this work. Therefore, for the convenience of the reader, before diving into the detailed explanation on how X-TFC works, we will give some generalities about NNs.
NNs are powerful mathematical tools, inspired by the biological neurosystems of the human brain, originally developed as function approximators for machine learning applications [51,52,53].
NNs are made of artificial neurons and their mutual connections. The output of every neuron is a non-linear function of the weighted sum of its inputs. The neurons are typically arranged into layers, whose number defines the type of NN. NNs with only a single layer of neurons are known as single-layer or shallow NNs. NNs with more than one layer of neurons are called Deep NNs (DNNs). Neurons and layers are not necessarily all connected among them. When all neurons and layers are connected, we generally talk about fully connected NNs (shallow or deep, depending on the number of layers). Every layer of a fully connected NN can be mathematically represented as follows,

$$\mathbf{y} = \sigma\!\left(W \mathbf{x} + \mathbf{b}\right)$$

where $W$ is the weight matrix, $\mathbf{x}$ and $\mathbf{y}$ are the input and output vectors, respectively, $\mathbf{b}$ is the bias vector, and $\sigma$ is the activation function, which can be either different or the same for every layer.
As previously stated, NNs were originally introduced as function approximators thanks to their interpolation and fitting abilities. An NN function approximator works in a supervised manner, where the training set $\{(x_i, y_i)\}_{i=1}^{N}$ consists of $N$ input points and $N$ output points (which may or may not be affected by noise). First, a trial function $\hat{y}(x; \theta)$, with $\theta$ collecting the network weights and biases, is randomly initialized, and the loss function can be defined as the MSE over the training set,

$$\mathcal{L}(\theta) = \frac{1}{N}\sum_{i=1}^{N}\left(\hat{y}(x_i; \theta) - y_i\right)^{2}$$
The training process consists in solving an optimization problem, where the loss function $\mathcal{L}$ is the objective function to be minimized and the decision variables are the weights and biases of each layer. Usually, the training is performed via stochastic gradient-based methods such as the Adam optimizer [54].
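A minimal hand-coded sketch of this training process follows (plain gradient descent stands in for Adam; the network size, learning rate, and target function are arbitrary illustrative choices):

```python
import numpy as np

# Supervised NN training as loss minimization: a single-hidden-layer
# network fitted to samples of y = x^2 by gradient descent on the MSE.
rng = np.random.default_rng(1)
x = np.linspace(-1.0, 1.0, 64).reshape(-1, 1)
y = x**2                                   # data to regress

W1 = rng.normal(0.0, 1.0, (1, 16)); b1 = np.zeros(16)   # hidden layer
W2 = rng.normal(0.0, 0.1, (16, 1)); b2 = np.zeros(1)    # output layer
lr = 0.1

for _ in range(5000):
    h = np.tanh(x @ W1 + b1)               # forward pass
    y_hat = h @ W2 + b2
    r = y_hat - y
    loss = np.mean(r**2)                   # MSE objective
    g_out = 2.0 * r / len(x)               # dLoss/dy_hat
    gW2 = h.T @ g_out; gb2 = g_out.sum(0)
    g_h = (g_out @ W2.T) * (1.0 - h**2)    # backprop through tanh
    gW1 = x.T @ g_h; gb1 = g_h.sum(0)
    W1 -= lr * gW1; b1 -= lr * gb1         # gradient-descent update
    W2 -= lr * gW2; b2 -= lr * gb2
```

The decision variables of the optimization are exactly the weights and biases, and the loop is the iterative minimization of $\mathcal{L}$ described above.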
2.2. Extreme Theory of Functional Connections (X-TFC)
In this work, we will focus on systems of ODEs (SODEs) used to describe epidemiological compartmental models. In general, we can express an ODE in its implicit form as,

$$\mathcal{N}\!\left[f(x); \boldsymbol{\lambda}\right] + \epsilon(x) = h(x)$$

subject to constraints given by initial conditions (ICs) and/or boundary conditions (BCs). In Equation (3), $f$ is the unknown (or latent) solution, with $x \in [x_0, x_f]$; $\boldsymbol{\lambda}$ are the parameters governing the ODE (in general, even if it is not reported in the notation, $f$ is a function of $x$ and is parametrized by $\boldsymbol{\lambda}$, which in turn can be $x$-dependent as well); $\mathcal{N}$ is a linear or non-linear operator acting on $f$ and parametrized by $\boldsymbol{\lambda}$; $\epsilon$ is the modeling error, which is negligible when solving problems where the physics is exactly modeled by the underlying DE; and $h$ is a known term that in general can be $x$-dependent and parametrized by $\boldsymbol{\lambda}$ as well.
The first step in our PINN-TFC based framework is to approximate the latent solution $f$ with a constrained expression, defined within the TFC [25],

$$f(x) \simeq f_{CE}\!\left(x, g(x)\right) = A(x) + B\!\left(x, g(x)\right)$$

where $A(x)$ analytically satisfies the DE constraints, and $B(x, g(x))$ projects the free function $g(x)$, which is a real-valued function, onto the space of functions that vanish at the constraints [37]. In the X-TFC method, we choose the free function, $g(x)$, to be a shallow NN trained via the ELM algorithm [50]. That is,
$$g(x) = \sum_{j=1}^{L} \beta_j\, \sigma_j\!\left(w_j x + b_j\right)$$

where $L$ is the number of hidden neurons, $w_j$ is the input weight connecting the $j$th hidden neuron and the input node, $\beta_j$ (with $j = 1, \ldots, L$) is the $j$th output weight connecting the $j$th hidden neuron and the output node, $b_j$ is the bias of the $j$th hidden neuron, and $\sigma_j$ are the activation functions. According to the ELM algorithm [50], biases and input weights are randomly selected and not tuned during the training; thus, they are known hyperparameters. The activation functions, $\sigma_j$, are also known, as they are user-selected. Thus, the only unknown NN hyperparameters to compute are the output weights $\beta_j$. Hence, we can write,

$$g(x) = \boldsymbol{\sigma}^{T}\!(x)\, \boldsymbol{\beta}$$
The step-by-step process to derive the constrained expression is provided in Section 2.2.1. Once $f$ is approximated with the NN, the second step of the X-TFC method is to define the loss functions,

$$\mathcal{L}_{data}(x_i) = f_{CE}(x_i) - \tilde{f}_i, \qquad \mathcal{L}_{DE}(x_i) = \mathcal{N}\!\left[f_{CE}(x_i); \boldsymbol{\lambda}\right] - h(x_i)$$

where $\tilde{f}_i$ are the real data, which may be perturbed. Once the losses are defined, we need to define the vector collecting all the unknowns, that is, the output weights $\boldsymbol{\beta}$ and the parameters $\boldsymbol{\lambda}$ governing the equations,

$$\boldsymbol{\Xi} = \left[\boldsymbol{\beta}^{T}, \boldsymbol{\lambda}^{T}\right]^{T}$$
Now, by combining the losses, an augmented loss function vector is formed as follows,

$$\mathbb{L}(\boldsymbol{\Xi}) = \left[\mathcal{L}_{data}^{T}, \mathcal{L}_{DE}^{T}\right]^{T}$$

and we enforce that, for a true solution, this vector should be equal to 0. This allows the unknowns to be solved for via different optimization schemes, e.g., least squares for linear problems [39] and iterative least squares for non-linear problems [40]. When solving inverse problems for parameter estimation, the iterative least-squares method is required. Thus, the estimate of the unknowns is updated at each iteration as follows,

$$\boldsymbol{\Xi}_{k+1} = \boldsymbol{\Xi}_{k} + \Delta\boldsymbol{\Xi}_{k}$$

where the $k$ subscript refers to the current iteration. In general, the $\Delta\boldsymbol{\Xi}_{k}$ term can be computed by performing classic linear least squares at each iteration of the iterative least-squares procedure as follows,

$$\Delta\boldsymbol{\Xi}_{k} = -\left(\mathbb{J}_{k}^{T}\,\mathbb{J}_{k}\right)^{-1}\mathbb{J}_{k}^{T}\,\mathbb{L}(\boldsymbol{\Xi}_{k})$$

where $\mathbb{J}_{k}$ is the Jacobian matrix containing the derivatives of the losses with respect to all the unknowns. One can compute the Jacobian either by hand or by means of computing tools, such as symbolic or automatic differentiation toolboxes. The iterative process is repeated until either of the following conditions is met,

$$\left\lVert \mathbb{L}(\boldsymbol{\Xi}_{k}) \right\rVert_{2} < \varepsilon_{tol} \qquad \text{or} \qquad \left\lVert \Delta\boldsymbol{\Xi}_{k} \right\rVert_{2} < \varepsilon_{tol}$$

where $\varepsilon_{tol}$ defines some user-prescribed tolerance.
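The iterative least-squares update can be exercised on a toy scalar inverse problem (the exponential-decay model, data, and initial guess below are illustrative assumptions, not the paper’s setup):

```python
import numpy as np

# Gauss-Newton-style iteration of the kind described above:
# Xi_{k+1} = Xi_k - (J^T J)^{-1} J^T L(Xi_k), until the loss or the step
# falls below a tolerance. Toy model: y(t) = exp(-lam * t), with the decay
# rate lam as the single unknown parameter.
t = np.linspace(0.0, 5.0, 20)
y_data = np.exp(-0.7 * t)              # noiseless synthetic data, true lam = 0.7

lam = 1.0                              # initial guess for the unknown
for k in range(50):
    L = np.exp(-lam * t) - y_data               # loss (residual) vector
    J = (-t * np.exp(-lam * t)).reshape(-1, 1)  # Jacobian dL/dlam
    step = -np.linalg.solve(J.T @ J, J.T @ L)   # classic linear least squares
    lam += step.item()
    if np.linalg.norm(L) < 1e-12 or abs(step.item()) < 1e-12:
        break
```

For the multi-parameter problems in this paper, `lam` becomes the full unknown vector (output weights plus model parameters) and `J` the corresponding Jacobian matrix, but the update rule is unchanged.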
In Figure 1, a schematic summarizing how the X-TFC algorithm works for solving inverse problems is shown. The main steps are also reported here:
Approximate the latent solution(s) with the CE;
Analytically satisfy the ICs/BCs;
Expand with the single layer NN (trained via ELM);
Substitute into the DE (that can be also a system of DEs);
Build the DE losses (that drive the training of the network, informing it with the physics of the problem);
Build the data losses (the data can be provided on the solutions and/or on their derivatives);
Train the network;
Build the approximate solution (with the estimated optimal parameters).
2.2.1. Constrained Expression Derivation
Since this paper focuses on IVPs, for the convenience of the reader, we will present the step-by-step derivation of the constrained expression for these kinds of problems. The interested reader can find the general derivation for an $n$-dimensional constrained expression either in [37] or [23]. Given a parametric ODE where we have a constraint on the initial value of the solution (i.e., $f(x_0) = f_0$), the constrained expression for $f$ is the following [25],

$$f(x) = g(x) + \eta$$

By imposing the constraint $f(x_0) = f_0$ into Equation (12), we get the following,

$$\eta = f_0 - g(x_0)$$

Now, by plugging this result back into Equation (12), we get,

$$f(x) = g(x) + \phi(x)\left(f_0 - g(x_0)\right)$$

where $\phi(x)$ is called the switching function. For an IVP with one constraint on $f$, we have $\phi(x) = 1$.
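A quick numerical sanity check of this constrained expression (the free functions and initial condition below are arbitrary choices for illustration):

```python
import numpy as np

# The IVP constrained expression f(x) = g(x) + (f0 - g(x0)) satisfies the
# initial condition exactly for ANY choice of the free function g: at
# x = x0 the free-function terms cancel and f(x0) = f0 by construction.
x0, f0 = 0.0, 10.0

def ce(x, g):
    return g(x) + (f0 - g(x0))   # switching function is 1 for a single IC

for g in (np.sin, np.tanh, lambda x: 3.0 * x**2 - 1.0):
    assert abs(ce(x0, g) - f0) < 1e-14   # IC holds regardless of g
```

This is precisely why the constraints do not need to be learned during training: they are built into the functional form itself.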
In general, $f$ is defined for $x \in [x_0, x_f]$, which can be inconsistent with the domain $[z_0, z_f]$ where the activation functions are defined. Thus, we need to map $x$ into the $z \in [z_0, z_f]$ domain as follows,

$$z = z_0 + c\,(x - x_0)$$

where the mapping coefficient $c$ is,

$$c = \frac{z_f - z_0}{x_f - x_0}$$

According to the chain rule of the derivative, we then have,

$$\frac{\mathrm{d}f}{\mathrm{d}x} = c\,\frac{\mathrm{d}f}{\mathrm{d}z}$$
3. Epidemiological Models Formulation
In this section, the X-TFC formulation for the data-driven parameter discovery of a series of epidemiological compartmental models is explained in detail. The presented models are the SIR [55], SEIR [56], and SEIRS [57], taking into account the vital dynamics and the vaccination (for the SEIR model) [58]. As already mentioned, the goal is to estimate the parameters of interest by solving inverse problems via a deterministic approach.
Given fixed parameters, by integration, we solve the systems of ODEs to create a synthetic dataset (with and without noise), through which the parameters that govern the physics of the problem can be retrieved. After building the constrained expressions and the loss functions, the Jacobian matrix (the matrix containing the derivatives of the losses with respect to the unknowns) is computed in order to perform the iterative least-squares and estimate the unknowns.
3.1. SIR Model
As a first problem, we consider the system of differential equations that governs the classic deterministic SIR (Susceptible-Infectious-Recovered) compartmental model, in which individuals in the recovered state gain total immunity to the pathogen, with vital dynamics to take into account the births (which can provide an increase in susceptible individuals) and the natural death rate. The DEs governing the SIR model are the following,

$$\frac{\mathrm{d}S}{\mathrm{d}t} = \mu N - \mu S - \frac{\beta I S}{N}$$
$$\frac{\mathrm{d}I}{\mathrm{d}t} = \frac{\beta I S}{N} - \gamma I - \mu I$$
$$\frac{\mathrm{d}R}{\mathrm{d}t} = \gamma I - \mu R$$

where $N = S + I + R$ is the total population, $\mu$ is the birth and natural death rate (considered equal to maintain a constant population), $\beta$ is the infectious rate, and $\gamma$ is the recovery rate. An important parameter to consider is the basic reproduction number $R_0$, which represents the ratio between $\beta$ and $\gamma$. If $R_0 > 1$, an outbreak is going to occur.
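A synthetic SIR trajectory can be generated from these equations as follows (the paper propagates the dynamics with MATLAB’s ODE113; here a fixed-step RK4 integrator stands in, and the parameter values and initial conditions are illustrative placeholders, not the paper’s settings):

```python
import numpy as np

# SIR-with-vital-dynamics propagation sketch. Values are hypothetical.
mu, beta, gamma = 0.01, 0.5, 0.2   # birth/death, infectious, recovery rates
N = 1000.0                         # total (constant) population

def sir_rhs(y):
    S, I, R = y
    dS = mu * N - mu * S - beta * I * S / N
    dI = beta * I * S / N - gamma * I - mu * I
    dR = gamma * I - mu * R
    return np.array([dS, dI, dR])

def rk4(y0, t_final=15.0, steps=1500):
    """Classic fixed-step 4th-order Runge-Kutta integration."""
    h = t_final / steps
    traj = [np.array(y0, float)]
    for _ in range(steps):
        y = traj[-1]
        k1 = sir_rhs(y); k2 = sir_rhs(y + h / 2 * k1)
        k3 = sir_rhs(y + h / 2 * k2); k4 = sir_rhs(y + h * k3)
        traj.append(y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4))
    return np.array(traj)

traj = rk4([990.0, 10.0, 0.0])     # 15-day analysis window
# With equal birth and death rates, the total population stays constant,
# since dS/dt + dI/dt + dR/dt = mu * (N - S - I - R) = 0.
```

A dataset like `traj` (optionally perturbed with noise) is what the inverse problem then consumes to recover $\beta$ and $\gamma$.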
According to the TFC framework, the latent solutions are approximated with the constrained expressions. That is,
The first three loss functions we present take into account the regression over the data. The last three losses drive the training of the NN, informing it with the physics governing the problem. The loss functions are reported below,
To construct the Jacobian matrix $\mathbb{J}$, we need to compute the derivatives of the losses with respect to the output weights $\boldsymbol{\beta}$ to compute the approximate solutions of the state variables, whereas the other derivatives are essential to estimate the parameters (in this case, $\beta$ and $\gamma$) appearing in the system of Equation (18). The resultant Jacobian matrix has the following form,
3.2. SEIR Model
The second problem that we aim to solve is the SEIR (Susceptible-Exposed-Infectious-Recovered) compartmental model. Compared to the previous one, this model takes into account the incubation period of a virus, i.e., the time in which a subject has come into contact with the virus but does not yet develop its symptoms. Therefore, the subject is infected but is not yet counted among the infectious. In addition, a vaccination parameter, which moves people directly from the Susceptible to the Recovered compartment, is added. The following is the ODE system describing the model,

$$\frac{\mathrm{d}S}{\mathrm{d}t} = \mu N - \mu S - \frac{\beta I S}{N} - v S$$
$$\frac{\mathrm{d}E}{\mathrm{d}t} = \frac{\beta I S}{N} - (\mu + \sigma) E$$
$$\frac{\mathrm{d}I}{\mathrm{d}t} = \sigma E - (\gamma + \mu) I$$
$$\frac{\mathrm{d}R}{\mathrm{d}t} = \gamma I - \mu R + v S$$

where $N = S + E + I + R$ is the total population, $\mu$ is the birth and natural death rate (considered equal to maintain a constant population), $v$ is the vaccination rate, $\beta$ is the infectious rate, $\sigma$ is the rate at which an Exposed person becomes Infectious, and $\gamma$ is the recovery rate.
According to the TFC framework, the latent solutions are approximated with the constrained expressions. That is,
The first four loss functions we present take into account the regression over the data. The last four losses drive the NN, informing it with the physics governing the problem. The loss functions are reported below,
To construct the Jacobian matrix $\mathbb{J}$, we need to compute the derivatives of the losses with respect to the output weights $\boldsymbol{\beta}$ to compute the approximate solutions of the state variables, whereas the other derivatives are essential to estimate the model parameters appearing in the system of Equation (18). The resultant Jacobian matrix has the following form,
3.3. SEIRS Model
The last problem we present here is the SEIRS (Susceptible-Exposed-Infectious-Recovered-Susceptible) compartmental model. This model is used when the immunity of recovered individuals wanes and they return to the Susceptible category. No vaccination is considered here. This model is governed by the following system of ODEs:

$$\frac{\mathrm{d}S}{\mathrm{d}t} = \Lambda - \mu S - \frac{\beta I S}{N} + \xi R$$
$$\frac{\mathrm{d}E}{\mathrm{d}t} = \frac{\beta I S}{N} - (\mu + \sigma) E$$
$$\frac{\mathrm{d}I}{\mathrm{d}t} = \sigma E - (\gamma + \mu) I$$
$$\frac{\mathrm{d}R}{\mathrm{d}t} = \gamma I - (\mu + \xi) R$$

where $N = S + E + I + R$ is the total population, $\mu$ is the natural death rate, $\Lambda$ is the new-births rate, $\beta$ is the infectious rate, $\sigma$ is the rate at which an Exposed person becomes Infectious, $\xi$ is the rate at which Recovered individuals return to the Susceptible state due to loss of immunity, and $\gamma$ is the recovery rate.
According to the TFC framework, the latent solutions are approximated with the constrained expressions. That is,
The first four loss functions we present take into account the regression over the data. The last four losses drive the NN, informing it with the physics governing the problem. The loss functions are reported below,
To construct the Jacobian matrix $\mathbb{J}$, we need to compute the derivatives of the losses with respect to the output weights $\boldsymbol{\beta}$ to compute the approximate solutions of the state variables, whereas the other derivatives are essential to estimate the model parameters appearing in the system of Equation (18). The resultant Jacobian matrix has the following form:
4. Results and Discussion
To test the ability of X-TFC in performing data-driven parameter discovery of epidemiological compartmental models, we have created synthetic datasets according to the three models presented above (SIR, SEIR, and SEIRS). In particular, for each model, a noise-free synthetic dataset (here called the original dataset) has been generated by simply propagating the dynamics equations of the model using the MATLAB function ODE113. In addition, to simulate a more realistic example, perturbed synthetic datasets have been created by adding, to each sample of the original dataset, a random noise proportional to the perturbation coefficient $\epsilon$ (equal to 0 for the original dataset) and drawn from a uniform distribution $\mathcal{U}$. The real values of the parameters governing the synthetic dataset are known, so that the accuracy of the results is measured by the absolute error between the real and estimated values of the parameters.
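One plausible realization of the described perturbation is sketched below (the exact noise model, a multiplicative relative error of amplitude $\epsilon$, is an assumption here):

```python
import numpy as np

# Dataset perturbation sketch: each sample of the original dataset is
# scaled by a uniform random relative error of amplitude eps. This exact
# form is an assumption for illustration.
rng = np.random.default_rng(42)

def perturb(f, eps):
    return f * (1.0 + eps * rng.uniform(-1.0, 1.0, size=f.shape))

f_orig = np.linspace(1.0, 2.0, 5)      # stand-in for an original dataset
f_pert = perturb(f_orig, eps=0.1)      # 10% perturbation coefficient
```

Setting `eps=0.0` recovers the original dataset exactly, matching the convention that $\epsilon = 0$ denotes the unperturbed case.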
Additionally, the X-TFC method involves several hyperparameters that can be modified to obtain accurate solutions. These hyperparameters are the number of training points, $n$, the number of neurons, $L$, the type of activation function, and the probability distribution from which the input weights and biases are sampled. Therefore, a sensitivity analysis has been performed to study the behavior of the X-TFC method as these hyperparameters vary. The sensitivity analysis is only shown for the SIR model with noise-free data, as a similar behavior has been encountered for all the other models considered. First of all, the sensitivity analysis has demonstrated that, for the models analyzed, the solution accuracy is not as sensitive to the type of activation function or to the probability distribution used to sample the input weights and biases as it is to the number of training points and the number of neurons, confirming the results of the sensitivity analysis reported in [23]. Hence, the two parameters that strongly influence the performance of X-TFC are $n$ and $L$.
Figure 2a,b refer to the analysis with the original dataset ($\epsilon = 0$). As illustrated, high values of $L$, with a fixed $n$, do not lead to an improvement of the accuracy, since Figure 2a presents an asymptotic-like behavior. The same considerations hold when varying $n$ and keeping $L$ fixed (Figure 2b). Indeed, the solution does not significantly improve as the number of discretization points increases. This result is also obtained if a perturbed dataset is considered (see Figure 2d). On the other hand, Figure 2c shows an interesting behavior: the accuracy of the solution worsens as the number of neurons $L$ increases. This trend is probably due to the fact that X-TFC tries to overfit the perturbed data, diverging too much from the real curves and thus yielding an inaccurate estimation of the parameters. The rest of this section focuses on the results obtained for each model presented previously. For these problems, the ArcTan activation function and a uniform random distribution ranging within [−10, 10] are employed for the ELM.
All the models tackled in this manuscript have been coded in MATLAB R2020a and run on an Intel Core i7-9700 CPU PC with 64 GB of RAM.
4.1. SIR Model
Here, the results and the performance for the SIR problem are shown. The outputs are obtained by setting the following parameters:
Natural mortality rate: (set equal to the birth rate, to simulate a constant population);
Effective contact rate (possibility to be infected): ;
Removal rate (how often infected people become recovered): ;
Initial conditions: ; ; ;
Analysis time: 15 days.
Several simulations are carried out by varying the intensity of the noise, and the outputs are reported in Table 1. While we could find the exact values of the parameters with the original dataset, a slight deviation of these values occurs as the perturbation coefficient $\epsilon$ increases. However, the absolute errors on the parameters retain at least two digits of accuracy.
Figure 3a,b report the perturbed and real dataset and the corresponding solution of the problem, respectively. As can be seen, X-TFC is able to obtain an accurate solution while avoiding overfitting the data, as would instead be expected from a simple regression on the perturbed dataset. This is due to the information about the physics of the problem, embedded in the physics-informed training framework, which acts as a regularizer. The accuracy of the inversion with the perturbed dataset is also confirmed by the constant value of the population $N$, as required by the theory.
4.2. SEIR Model
Here, the results and the performance for the SEIR problem are shown. The outputs were obtained by setting the following parameters:
Natural mortality rate (set equal to the birth rate, to simulate a constant population);
Vaccine rate ;
Effective contact rate (possibility to be infected) ;
Removal rate (how often infected people become recovered) ;
Progression rate from exposed to infected ;
Initial conditions: ; ; ; ;
Days = 15.
Several simulations are carried out by varying the intensity of the noise, and the outputs are reported in Table 2. While we could find the exact values of the parameters with the original dataset, a slight deviation of these values occurs as the perturbation coefficient $\epsilon$ increases. However, the absolute errors on the parameters retain at least one digit of accuracy.
Figure 4a,b report the perturbed and real dataset and the corresponding solution of the problem, respectively. Again, X-TFC is able to obtain an accurate solution while avoiding overfitting the data, as would instead be expected from a simple regression on the perturbed dataset.
4.3. SEIRS Model
Here, the results and the performance for the SEIRS problem are shown. The outputs were obtained by setting the following parameters:
Natural mortality rate (set equal to the birth rate, to simulate a constant population);
Effective contact rate (possibility to be infected) ;
Removal rate (how often infected people become recovered) ;
Progression rate from exposed to infected ;
Rate at which recovered individuals return to the susceptible state (due to loss of immunity) ;
Initial conditions: ; ; ; ;
Days = 15.
Several simulations are carried out by varying the intensity of the noise, and the outputs are reported in Table 3. While we could find the exact values of the parameters with the original dataset, a slight deviation of these values occurs as the perturbation coefficient $\epsilon$ increases. However, the absolute errors on the parameters retain at least two digits of accuracy.
Figure 5a,b report the perturbed and real dataset and the corresponding solution of the problem, respectively. Again, X-TFC is able to obtain an accurate solution while avoiding overfitting the data, as would instead be expected from a simple regression on the perturbed dataset.
5. Conclusions
In this work, the new PINN framework X-TFC has been employed to solve the data-driven discovery of DEs, also called inverse problems, via a deterministic approach. In particular, compartmental epidemiological models (SIR, SEIR, and SEIRS) have been taken into account as test problems. The goal was to retrieve the parameters governing the dynamics equations considering both unperturbed and perturbed data, the latter to better simulate reality. The tests have shown fairly accurate results even when significant noise was added to the data. Furthermore, the information about the physics of the problem (used for the training of X-TFC) has made it possible to avoid overfitting and thus to obtain good estimates of the parameters with noisy data. The low computational times obtained are extremely important for processing data as soon as they are acquired, so that the results can be updated in real time. Moreover, the good parameter estimates allow one to make predictions about the imminent future: this makes it possible to take action in the short term (as required in emergency scenarios, such as the COVID-19 pandemic). Future work involves the inversion of models with non-constant parameters (i.e., parameters that follow mathematical laws), as well as probabilistic parameter estimation (via Bayesian inversion) in different research fields, such as business, biology, space, and nuclear engineering.