1. Introduction
Providing reliable epidemiological forecasts during an ongoing pandemic is crucial to mitigate the potentially disastrous consequences for global public health and the economy. As the ongoing COVID-19 pandemic sadly illustrates, this is a daunting task in the case of new diseases due to the incomplete knowledge of the behavior of the disease and the heterogeneities and uncertainties in the health data counts. Despite these difficulties, many forecasting strategies exist, and we can cast them into two main categories: the first type is purely data-based and involves statistical and learning methods such as time series analysis, multivariate linear regression, grey forecasting or neural networks [1,2,3,4,5]; the second approach uses epidemiological models, which are appealing since they provide an interpretable insight into the mechanisms of the outbreak. They also provide high flexibility in the level of detail used to describe the evolution of a pandemic, ranging from simple compartmental models that divide the population into a few exclusive categories to highly detailed descriptions involving numerous compartments or even agent-based models (see, e.g., [6,7,8] for general references on mathematical epidemiological models and [9,10,11] for some models focused on COVID-19). One salient drawback of using epidemiological models for forecasting purposes lies in the very high uncertainty in the estimation of the relevant parameters. This is due to the fact that the parameters often cannot be inferred from real observations, and the available data are insufficient or too noisy to provide any reliable estimation. The situation is aggravated by the fact that the number of parameters can quickly become large even in moderately simple compartmental models [10]. As a result, forecasting with these models involves making numerous a priori hypotheses which can sometimes be difficult to justify by data observations.
In this paper, our goal is to forecast the time series of infected, removed and dead patients with compartmental models that involve as few parameters as possible in order to infer these series solely from the data. The available data are only given for hospitalized people; one can nevertheless estimate the total number of infected people through an adjustment factor taken from the literature. Such a factor takes into account the proportion of asymptomatic people and of infected people who do not go to hospital. The model that includes the least number of parameters is probably the susceptible–infected–removed (SIR) model [12], which is based on a partition of the population into the following groups:
Uninfected people, called susceptible (S);
Infected and contagious people (I), with more or less marked symptoms;
People removed (R) from the infectious process, either because they were cured or unfortunately died after being infected.
If N denotes the total population size, which we assume to be constant over a certain time interval [0, T], we have N = S(t) + I(t) + R(t), and the evolution from S to I and from I to R is given for all t in [0, T] by
dS/dt = -beta S I / N,
dI/dt = beta S I / N - gamma I,
dR/dt = gamma I.
The SIR model has only two parameters: the transmission rate beta and the removal rate gamma.
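The SIR dynamics above can be illustrated with a minimal numerical sketch. The explicit Euler scheme and the parameter values below are illustrative choices, not the integrator or calibration used in the paper:

```python
import numpy as np

def simulate_sir(beta, gamma, N, I0, R0, days, steps_per_day=10):
    """Integrate S' = -beta*S*I/N, I' = beta*S*I/N - gamma*I, R' = gamma*I
    with an explicit Euler scheme (a deliberately simple choice)."""
    dt = 1.0 / steps_per_day
    S, I, R = N - I0 - R0, float(I0), float(R0)
    out = [(S, I, R)]
    for _ in range(days * steps_per_day):
        new_inf = beta * S * I / N * dt   # flux S -> I
        new_rem = gamma * I * dt          # flux I -> R
        S, I, R = S - new_inf, I + new_inf - new_rem, R + new_rem
        out.append((S, I, R))
    return np.array(out)

traj = simulate_sir(beta=0.3, gamma=0.1, N=1e6, I0=100, R0=0, days=120)
```

By construction, the scheme conserves the total population S + I + R = N at every step, S is nonincreasing and R is nondecreasing, which are exactly the properties used later in Proposition 1.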
Our forecasting strategy is motivated by the following observation: by allowing the parameters beta and gamma to be time-dependent, we can find optimal coefficients beta(t) and gamma(t) that exactly fit any series of infected and removed patients. In other words, we can perfectly fit any observed health series with an SIR model with time-dependent coefficients.
As we explain below, the high fitting power stems from the fact that the parameters beta and gamma are searched in L-infinity, the space of essentially bounded measurable functions. For our forecasting purposes, however, this space is too large to give any predictive power, and we need to find a smaller manifold that simultaneously has good fitting and forecasting properties. To this end, we developed a method based on model order reduction. The idea of the method is to find a space of reduced dimension that can host the dynamics of the current epidemic. This reduced space is learnt from a series of detailed compartmental models based on precise underlying mechanisms of the disease. One major difficulty in these models is the fitting of the correct parameters. In our approach, we do not seek to estimate these parameters; instead, we consider a large range of possible parameter configurations with a uniform sampling that allows us to simulate virtual epidemic scenarios over a longer range than the fitting window [0, T]. We next cast each virtual epidemic from the family of detailed compartmental models into the family of SIR models with time-dependent coefficients, as explained below. This procedure yields time-dependent parameters beta and gamma for each detailed virtual epidemic. The set of all such beta (or gamma) is then condensed into a reduced basis with a small dimension. We finally use the available health data on the time window [0, T] to find the functions beta and gamma from the reduced space that best fit the current epidemic over [0, T]. Since the reduced basis functions are defined over a longer time range [0, T + tau] with tau > 0 (e.g., two weeks), the strategy automatically provides forecasts from T to T + tau. Its accuracy will be related to the pertinence of the mechanistic mathematical models that have been used in the learning process.
We note that an important feature of our approach is that all virtual simulations are considered equally important in the first stage, and the procedure automatically learns which scenarios (or linear combinations of scenarios) best describe the available data. Moreover, the approach even mixes different compartmental models to accommodate these available data. This is in contrast to other existing approaches, which introduce a strong a priori belief regarding the quality of a certain particular model. Note also that we can add models related to other illnesses and use the large manifold to fit a possible new epidemic. It is also possible to mix the current approach with other purely statistical or learning strategies by means of expert aggregation. One salient difference with these approaches which is important to emphasize is that our method hinges on highly detailed compartmental models which have clear epidemiological interpretations. Our methodology of collapsing into the time-dependent SIR is a way of "summarizing" the dynamics into a few terms. One may expect that reducing the number of parameters comes at the cost of losing the interpretability of the parameters, and this is true in general. Nevertheless, the numerical results of the present work show that a reasonable tradeoff between the "reduction of the number of parameters" and the "interpretability of these few parameters" can be achieved.
The paper is organized as follows. In Section 2, we present the forecasting method in the case of a single region with a constant population. For this, in Section 2.1, we briefly introduce the epidemiological models involved in the procedure, namely the SIR model with time-dependent coefficients and the more detailed compartmental models used for the training step. In Section 2.2, after proving that the SIR model with time-dependent coefficients in L-infinity is able to fit any admissible epidemiological evolution (as explained below), we present the main steps of the forecasting method. The method involves a collapsing step from detailed models to SIR models with time-dependent coefficients and model reduction techniques. We detail these points in Section 2.3 and Section 2.4. In Section 3, we explain how the method can easily be extended to a multi-regional context involving population mobility and regional health data observations (provided, of course, that mobility data are available). In Section 3.1, we begin by clarifying that the nature of the mobility data will dictate the kind of multi-regional SIR model to use in this context. In Section 3.2, we outline how to adapt the main steps of the method to the multi-regional case. Finally, in Section 4, we present numerical results for the two pandemic waves of COVID-19 in France in 2020, which took place approximately between February and November 2020. Concluding comments are given in Section 5, followed by two appendices, Appendix A and Appendix B, that present details about the processing of the data noise and the forecasting error.
2. Methodology for a Single Region
For the sake of clarity, we first consider the case of a single region with a constant population and no population fluxes with other regions. Here, the term region is generic and may be applied to very different geographical scales, ranging from a full country to a department within a country or even smaller partitions of a territory.
2.1. Compartmental Models
The final output of our method is a mono-regional SIR model with time-dependent coefficients as explained below. This SIR model with time-dependent coefficients is evaluated with reduced modeling techniques involving a large family of models with finer compartments proposed in the literature. Before presenting the method in the next section, we here introduce all the models that we use in this paper along with useful notations for the rest of the paper.
2.1.1. SIR Models with Time-Dependent Parameters
We fit and forecast the series of infected and removed patients (dead and recovered) with SIR models where the coefficients beta and gamma are time-dependent. In the following, we use bold-faced letters for past-time quantities; for example, a bold letter denotes the history (I(t)) over [0, T] of the corresponding function I. Using this notation, for any given beta and gamma, we denote by (S, I, R) the solution of the associated SIR dynamics in [0, T].
2.1.2. Detailed Compartmental Models
Models involving many compartments offer a detailed description of epidemiological mechanisms at the expense of involving many parameters. In our approach, we use them to generate virtual scenarios. One of the initial motivations behind the present work is to provide forecasts for the COVID-19 pandemic; thus, we have selected the two following models, which are specific to this disease, but note that any other compartmental model [9,10,16] or agent-based simulation could also be used.
First model, SEI5CHRD: This model is inspired by the one proposed in [10]. It involves 11 different compartments and a set of 19 parameters (see Table 1). The dynamics of the model are illustrated in Figure 1, and the system of equations reads as follows:
The different parameters involved in the model are described in Table 2 and detailed in the appendix of [10]. We denote by SEI5CHRD(mu) the parameter-to-solution map, where mu is the vector of parameters.
Second model, SE2IUR: This model is a variant of the one proposed in [9]. It involves five different compartments (see Table 3) and a set of six parameters. The dynamics of the model are illustrated in Figure 2, and the set of equations is as follows:
We denote by SE2IUR(mu) the parameter-to-solution map, where mu is the vector of parameters. The different parameters involved in the model are described in Table 4.
Generalization: In the following, we abstract the procedure as follows. For any Detailed_Model with d compartments involving a vector mu of p parameters, we denote by Detailed_Model(mu) the parameter-to-solution map, defined over a given simulation time interval that can be as large as desired because this is a virtual scenario. Note that, because the initial condition of the illness at time 0 is not known, we include the initial condition in the parameter set mu.
2.2. Forecasting Based on Model Reduction of Detailed Models
We assume that we are given health data in a time window [0, T], where T is the present time. The observed data are the series of infected people and of removed people. They are usually given at a national or a regional scale and on a daily basis. For our discussion, it is useful to work with time-continuous functions, and we use the piecewise constant approximations in [0, T] built from the given daily data. Our goal is to give short-term forecasts of the series in a time window [T, T + tau] whose size tau is about two weeks; the approximations to the two series at any time are denoted accordingly.
As mentioned above, we propose to fit the data and provide forecasts with SIR models with time-dependent parameters beta and gamma. The main motivation for using such a simple family is that it possesses optimal fitting and forecasting properties for our purposes, as explained above. We define a cost function measuring the misfit between the SIR output and the observed series, and the fitting problem can be expressed at the continuous level as the optimal control problem of finding the minimizing pair (beta, gamma). The following result ensures the existence of a unique minimizer under very mild constraints.
Proposition 1. Let T > 0 and N > 0. For any real-valued functions I and R of class C1, defined on [0, T] and satisfying
- (i) I(t) > 0 and S(t) = N - I(t) - R(t) > 0 for every t in [0, T],
- (ii) S is nonincreasing on [0, T],
- (iii) R is nondecreasing on [0, T],
there exists a unique minimizer (beta, gamma) to Equation (2).
Proof. One can set
gamma(t) = R'(t) / I(t) and beta(t) = -N S'(t) / (S(t) I(t)),   (3)
so that the SIR equations are satisfied exactly by the data (S, I, R), which obviously implies that the fitting cost vanishes. □
Note that one can equivalently set beta(t) = N (I'(t) + R'(t)) / (S(t) I(t)), since S' = -(I + R)'. This simple observation means that there exists a time-dependent SIR model which can perfectly fit the data of any (epidemiological) evolution that satisfies properties (i), (ii), and (iii). In particular, we can perfectly fit the series of sick people with a time-dependent SIR model (with a smoothing of the local oscillations due to noise). Since the health data are usually given on a daily basis, we can approximate beta and gamma by approximating the derivatives by classical finite differences in Equation (3).
The fact that we can build beta and gamma such that the fit is exact implies that the family of time-dependent SIR models is rich enough not only to fit the evolution of any epidemiological series but also to deliver perfect predictions of the health data. It is however important to note that, since the coefficients beta and gamma are derived exclusively from the data and depend on time, we lose the direct interpretations of these coefficients in terms of the length of the contagious period or the transmission rate that they possess when they are considered constant. The great approximation power also comes at the cost of defining the parameters beta and gamma in L-infinity, which is a space that is too large to be able to define any feasible prediction strategy.
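The exact-fit formulas of Proposition 1 can be sketched numerically: with daily data, the derivatives in formula (3) become forward differences. The helper below is an illustrative implementation, with the sanity check run on synthetic data from a daily-step SIR with constant coefficients (for which the recovery is exact):

```python
import numpy as np

def fit_time_dependent_sir(I, R, N):
    """Recover gamma(t) = R'(t)/I(t) and beta(t) = N*(I'(t)+R'(t))/(S(t)*I(t))
    from daily series I, R, using forward differences for the derivatives."""
    I, R = np.asarray(I, float), np.asarray(R, float)
    S = N - I - R
    dI, dR = np.diff(I), np.diff(R)
    gamma = dR / I[:-1]
    beta = N * (dI + dR) / (S[:-1] * I[:-1])
    return beta, gamma

# sanity check on data generated by a daily-step SIR with constant coefficients
N, beta0, gamma0 = 1e6, 0.3, 0.1
S, I, R = [N - 100.0], [100.0], [0.0]
for _ in range(60):
    ni = beta0 * S[-1] * I[-1] / N
    nr = gamma0 * I[-1]
    S.append(S[-1] - ni); I.append(I[-1] + ni - nr); R.append(R[-1] + nr)
beta, gamma = fit_time_dependent_sir(I, R, N)
```

On real noisy data, the same differences amplify the daily oscillations, which is why the paper mentions smoothing the local noise before applying (3).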
In order to pin down a smaller manifold where these parameters may vary without sacrificing much in terms of the fitting and forecasting power, our strategy is the following:
Learning phase: The fundamental hypothesis of our approach is that the specialists of epidemiology have understood the mechanisms of infection transmission sufficiently well. The second hypothesis is that these accurate models suffer from the difficulty of properly determining the parameters they contain. We thus propose to generate a large number of virtual epidemics, following these mechanistic models, with parameters chosen in the vicinity of the suggested parameter values, including the different initial conditions.
- (a)
Generate virtual scenarios using detailed models with constant coefficients:
Define the notion of a Detailed_Model which is most appropriate for the epidemiological study. Several models could be considered. For our numerical application, the detailed models are defined in Section 2.1.
Define an interval range where the parameters mu of Detailed_Model vary. We call the solution manifold the set of virtual dynamics over [0, T + tau].
Draw a finite training set of K parameter instances and compute the detailed solution for each of them. Each solution is a virtual epidemiological scenario. An important detail for our prediction purposes is that the simulations are done in [0, T + tau]; that is, we simulate not only in the fitting time interval but also in the prediction time interval. We call the set of all virtual scenarios the training set.
- (b)
Collapse every detailed model into an SIR model following the ideas explained in Section 2.3. For every virtual scenario, the procedure gives time-dependent parameters beta and gamma and associated SIR solutions in [0, T + tau]. This yields the training sets of collapsed coefficients.
- (c)
Compute reduced models:
We apply model reduction techniques using the collapsed coefficients beta and gamma as training sets in order to build two reduced bases, which are defined over [0, T + tau]. Each reduced space is built so that it approximates well all functions of the corresponding training set. In Section 2.4, we outline the methods we have explored in our numerical tests.
Fitting on the reduced spaces: We next solve the fitting problem (2) in the interval [0, T] by searching beta (or gamma) in the n-dimensional reduced space instead of in L-infinity; that is, we minimize the same cost over the reduced spaces. Note that the respective dimensions of the two reduced spaces can be different; for simplicity, we consider them to be equal in the following. Obviously, since the reduced spaces are contained in L-infinity, the minimal value of the cost over them is larger than over the full space, but we numerically observe that this minimal value decreases very rapidly as n increases, which indirectly confirms the fact that the manifold generated by the two above models accommodates the COVID-19 epidemic well.
The solution of problem (5) gives us the coefficients of beta and gamma in the reduced bases such that the resulting time-dependent parameters achieve the minimum in (5).
Forecast: For a given dimension n of the reduced spaces, we propagate in [0, T + tau] the associated SIR model. The values of the resulting solution for t in [0, T] are by construction close to the observed data (up to some numerical optimization error), and the values for t in (T, T + tau] are then used for prediction.
Forecast combination/aggregation of experts (optional step): By varying the dimension n and using different model reduction approaches, we can easily produce a collection of different forecasts, and thus the question of how to select the best predictive model arises. Alternatively, we can also resort to forecast combination techniques [17]: denoting the different forecasts of the infected series, the idea is to search for an appropriate linear combination of them, and to perform a similar operation for R. Note that these combinations do not need to involve forecasts from our methodology only; other approaches such as time series forecasts could also be included. One simple forecast combination is the average, in which all alternative forecasts are given the same weight. More elaborate approaches consist in estimating the weights that minimize a loss function involving the forecast error.
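The two combination strategies mentioned above can be sketched as follows. The least-squares weight estimation on a past window is one possible loss-minimizing choice, not necessarily the one used in [17]; the toy forecasts are hypothetical:

```python
import numpy as np

def combine_forecasts(forecasts, observed):
    """Estimate combination weights by least squares against the observed
    past window, then apply them over the full range (past + horizon).
    `forecasts` has shape (M, T + horizon); `observed` has length T."""
    F = np.asarray(forecasts, float)
    y = np.asarray(observed, float)
    T = y.size
    w, *_ = np.linalg.lstsq(F[:, :T].T, y, rcond=None)
    return w, F.T @ w

t = np.arange(30.0)
F = np.vstack([t, t ** 2])                          # two toy forecasts
w, combined = combine_forecasts(F, 0.3 * t[:20] + 0.7 * t[:20] ** 2)
average = F.mean(axis=0)                             # equal-weight alternative
```

When the observed series is an exact linear combination of the individual forecasts on the past window, the least-squares weights recover it exactly; in practice the weights trade off the past errors of the different experts.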
Before going into detail regarding some of the steps, three points should be highlighted:
To bring out the essential mechanisms, we have idealized some elements in the above discussion by omitting certain unavoidable discretization aspects. To start with, the ODE solutions cannot be computed exactly but only up to a certain level of accuracy given by a numerical integration scheme. In addition, the optimal control problems (2) and (5) are non-convex. As a result, in practice, we can only find a local minimum. Note, however, that modern solvers find solutions which are very satisfactory for all practical purposes. In addition, note that solving the control problem in a reduced space as in (5) could be interpreted as introducing a regularizing effect with respect to the control problem (2) in the full space. It is to be expected that the search for global minimizers is facilitated in the reduced landscape.
routine-IR and routine-betagamma: A variant of the fitting problem (5) studied in our numerical experiments is to replace the cost function on the health series by a cost function acting directly on the SIR coefficients. In other words, we use the variables beta and gamma from (3) as observed data instead of working with the observed health series of infected and removed people. In Section 4, we refer to the standard fitting method as routine-IR and to this variant as routine-betagamma.
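When the target data are the coefficients from (3) and the search space is a linear reduced basis, the variant reduces to an ordinary least-squares projection. The sketch below assumes a generic basis stored column-wise; the polynomial toy basis is hypothetical and only illustrates the extrapolation mechanism:

```python
import numpy as np

def project_onto_basis(signal, basis, fit_len):
    """Least-squares coefficients of `signal` on the reduced basis columns
    over the fitting window [0, fit_len), then evaluation of the expansion
    over the full, longer time range, which yields the extrapolation."""
    B = np.asarray(basis, float)       # shape (Q, n) with Q > fit_len
    y = np.asarray(signal, float)      # shape (fit_len,)
    c, *_ = np.linalg.lstsq(B[:fit_len], y, rcond=None)
    return c, B @ c

# toy basis: constant, linear and quadratic modes over 40 days
t = np.arange(40.0)
B = np.vstack([np.ones_like(t), t, t ** 2]).T
c, extrapolated = project_onto_basis(2.0 + 0.5 * t[:25], B, fit_len=25)
```

Because the basis columns are defined over the full range [0, T + tau], evaluating the fitted expansion beyond the fitting window directly provides the forecast of the coefficient.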
The fitting procedure optimizes both the components in the reduced basis and the initial time of the epidemic to minimize the loss function; however, for simplicity, this last optimization is not reported here.
2.3. Details on Step 1-(b): Collapsing the Detailed Models into SIR Dynamics
Let u(mu) be the solution in [0, T + tau] to a detailed model for a given vector of parameters mu. Here, the number of compartments d is possibly large (d = 11 in the case of the SEI5CHRD model and d = 5 in the case of the SE2IUR model). The goal of this section is to explain how to collapse the detailed dynamics into SIR dynamics with time-dependent parameters. The procedure can be understood as a projection of a detailed dynamics into the family of dynamics given by SIR models with time-dependent parameters.
For the SEI5CHRD model, we collapse the variables by aggregating the detailed compartments into susceptible, infectious, and removed groups; we proceed similarly for the SE2IUR model. Note that the collapsed variables S, I, and R depend on mu, but we have omitted this dependence to simplify the notation.
Once the collapsed variables are obtained, we identify the time-dependent parameters beta and gamma of the SIR model by solving the fitting problem (7), where the cost measures the misfit between the SIR solution and the collapsed variables. Note that problem (7) has the same structure as problem (2), with the difference arising from the fact that the collapsed variables in (7) play the role of the health data in (2). Therefore, it follows from Proposition 1 that problem (7) has a very simple solution, as it suffices to apply formula (3) to solve it. Note here that the exact derivatives of the collapsed variables can be obtained from the Detailed_Model.
Since the solution to (7) depends on the parameter mu of the detailed model, repeating the above procedure for every detailed scenario yields the two families of time-dependent functions beta and gamma defined in the interval [0, T + tau], as introduced in (4).
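The collapsing step can be sketched on a stand-in detailed model. The paper's exact aggregation formulas for SEI5CHRD and SE2IUR are not reproduced here; as an illustration, we use a generic SEIR model and pool the exposed and infectious compartments into the collapsed infectious group, which is our assumption:

```python
import numpy as np

def simulate_seir(beta, sigma, gamma, N, E0, I0, days):
    """Daily-step SEIR model standing in for a detailed model."""
    S, E, I, R = N - E0 - I0, float(E0), float(I0), 0.0
    traj = [(S, E, I, R)]
    for _ in range(days):
        ni = beta * S * I / N        # flux S -> E
        ns = sigma * E               # flux E -> I
        nr = gamma * I               # flux I -> R
        S, E, I, R = S - ni, E + ni - ns, I + ns - nr, R + nr
        traj.append((S, E, I, R))
    return np.array(traj)

traj = simulate_seir(beta=0.4, sigma=0.25, gamma=0.1, N=1e6, E0=50, I0=10, days=100)
# collapse: pool E and I into the SIR infectious compartment (our assumption)
S_c, I_c, R_c = traj[:, 0], traj[:, 1] + traj[:, 2], traj[:, 3]
# identify the time-dependent SIR coefficients, formula (3) style
gamma_t = np.diff(R_c) / I_c[:-1]
beta_t = 1e6 * (np.diff(I_c) + np.diff(R_c)) / (S_c[:-1] * I_c[:-1])
```

Note that the effective coefficients inherit a time dependence from the hidden compartment structure: here beta_t and gamma_t stay below the detailed model's constant rates because only part of the pooled infectious group is actually transmitting or recovering.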
2.4. Details of Model Order Reduction
Model order reduction is a family of methods aiming at the approximation of a set of solutions of parametrized PDEs or ODEs (or related quantities) with linear spaces, which are called reduced models or reduced spaces. In our case, the sets to approximate are the families of collapsed coefficients beta and gamma, where each mu is the vector of parameters of the detailed model, taking values over the parameter range, and beta and gamma are the associated time-dependent coefficients of the collapsed SIR evolution. In the following, we view these two families as subsets of L2, and we denote by the usual symbols its norm and inner product. Indeed, in view of Proposition 1, the collapsed coefficients are bounded and therefore belong to L2.
Continuing the discussion for the set of beta coefficients (the same will hold for gamma), if we measure performance in terms of the worst error in the set, the best possible performance that reduced models of dimension n can achieve is given by the Kolmogorov n-width, defined as the infimum over all n-dimensional subspaces Y of the worst-case distance from the set to Y, the distance being realized by the orthogonal projection onto the subspace Y. In the case of measuring errors in an average sense, the benchmark is given by the analogous quantity in which the supremum is replaced by an average with respect to some given measure on the parameter set.
In practice, building spaces that meet these benchmarks is generally not possible. However, it is possible to build sequences of spaces for which the error decay is comparable to that of the benchmarks. As a result, when the Kolmogorov width decays quickly, the constructed reduced spaces will deliver a very good approximation of the set with few modes (see [18,19,20,21]).
We next present the reduced models used in our numerical experiments. Other methods could, of course, be considered, and we refer readers to [22,23,24,25] for general references on model reduction. We continue the discussion in a fully discrete setting in order to simplify the presentation and keep it as close to the practical implementation as possible. All the claims below could be written in a fully continuous sense at the expense of introducing additional mathematical objects such as certain Hilbert–Schmidt operators to define the continuous version of the Singular Value Decomposition (SVD).
We build the reduced models using the two discrete training sets of functions beta and gamma generated in step 1-(b) of our general pipeline (see Section 2.2), where K denotes the number of virtual scenarios considered.
We consider a discretization of the time interval [0, T + tau] into a set of Q points. Thus, we can represent each function as a vector of Q values and assemble all the functions of the beta family into a matrix; the same remark applies to the gamma family, which gives a second matrix.
SVD: The eigenvalue decomposition of the correlation matrix of the snapshots gives an orthogonal matrix of eigenvectors and a diagonal matrix of non-negative eigenvalues, which we present in decreasing order. The L2-orthogonal basis functions are then given by linear combinations of the snapshots with coefficients read from the eigenvectors. For each n, the span of the first n basis functions is the best n-dimensional space to approximate the training set in the average sense, and the average approximation error is given by the sum of the tail of the eigenvalues.
Therefore, the SVD method is particularly efficient if there is a fast decay of the eigenvalues, meaning that the set can be approximated by only a few modes. However, note that, by construction, this method does not ensure positivity, in the sense that the reduced approximation of a function may become negative at some times even though the original function is nonnegative at all times. This is due to the fact that the basis vectors are not necessarily nonnegative. As we will see later, in our study, ensuring positivity, especially for extrapolation (i.e., forecasting), is particularly important and motivates the next methods.
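The SVD construction can be sketched as follows. The exponential snapshot family below is a hypothetical stand-in for the collapsed coefficient profiles; in practice the snapshot matrix is assembled from the training sets of step 1-(b):

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 80)                    # Q = 80 time points
params = rng.uniform(0.5, 2.0, size=(40, 2))     # K = 40 virtual scenarios
snapshots = np.array([a * np.exp(-b * t) for a, b in params]).T  # Q x K

# POD/SVD: left singular vectors of the snapshot matrix give the basis
U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
n = 3
basis = U[:, :n]                                  # reduced basis, Q x n
approx = basis @ (basis.T @ snapshots)            # orthogonal projection
rel_err = np.linalg.norm(approx - snapshots) / np.linalg.norm(snapshots)
```

For such smooth one-parameter families, the singular values decay very fast and a handful of modes already captures the whole set, which is the regime in which the method is efficient. Note also that, as discussed above, nothing prevents `approx` from taking small negative values even though every snapshot is positive.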
Nonnegative Matrix Factorization (NMF, see [26,27]): NMF is a variant of the SVD involving nonnegative modes and expansion coefficients. In this approach, we build a family of non-negative basis functions and approximate each training function with a linear combination (8) in which both the coefficients and the basis functions are nonnegative. In other words, we solve the following constrained optimization problem:
We refer readers to [27] for further details on the NMF and its numerical aspects.
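A minimal NMF sketch using the classical Lee–Seung multiplicative updates (one standard solver among several; the paper does not specify which algorithm is used) is:

```python
import numpy as np

def nmf(X, n, iters=1000, seed=0, eps=1e-9):
    """Lee-Seung multiplicative updates for X ~= W @ H with W, H >= 0.
    X: (Q, K) nonnegative snapshot matrix; n: number of modes."""
    rng = np.random.default_rng(seed)
    Q, K = X.shape
    W = rng.uniform(0.1, 1.0, (Q, n))
    H = rng.uniform(0.1, 1.0, (n, K))
    for _ in range(iters):
        H *= (W.T @ X) / (W.T @ W @ H + eps)   # update coefficients
        W *= (X @ H.T) / (W @ H @ H.T + eps)   # update modes
    return W, H

rng = np.random.default_rng(1)
X = rng.uniform(0.0, 1.0, (30, 2)) @ rng.uniform(0.0, 1.0, (2, 20))  # rank 2, >= 0
W, H = nmf(X, n=2)
rel_err = np.linalg.norm(X - W @ H) / np.linalg.norm(X)
```

The multiplicative form of the updates preserves the nonnegativity of the factors, so any reconstruction built from the modes in W with coefficients from H is automatically nonnegative, which is exactly the property missing from the plain SVD.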
The Enlarged Nonnegative Greedy (ENG) algorithm with projection on an extended cone of positive functions: We now present our novel model reduction method, which is of interest in itself as it allows reduced models that preserve positivity, and even other types of bounds, to be built. The method stems from the observation that NMF approximates functions in the cone of positive functions since it imposes nonnegative coefficients in the linear combination (8). However, note that the positivity of the linear combination is not equivalent to the positivity of the coefficients, since there are obviously linear combinations involving small negative coefficients for some modes which may still deliver a nonnegative result. We can thus widen the cone of linear combinations yielding positive values by carefully including these negative coefficients. One possible strategy for this is proposed in Algorithm 1, which describes a routine that we call Enlarge_Cone. The routine takes a set of nonnegative functions as an input and modifies each function by iteratively adding negative multiples of the other basis functions (see line 11 of the routine). The coefficients are chosen in an optimal way so as to maintain the positivity of the final linear combination while minimizing the L2-norm. The algorithm returns the modified set of functions. Note that the algorithm requires the setting of a tolerance parameter for the computation of the coefficients.
Once we have run Enlarge_Cone, the approximation of any function is then sought as a nonnegative combination of the modified basis functions. We emphasize that the routine is valid for any set of nonnegative input functions. We can thus apply Enlarge_Cone to the functions from the NMF but also to the functions selected by a greedy algorithm such as the following:
At step n, we have selected a set of n functions; we next add to the basis the training function that is worst approximated by their span. In our numerical tests, we call the Enlarged Nonnegative Greedy (ENG) method the routine combining the above greedy algorithm with our routine Enlarge_Cone.
Algorithm 1 Enlarge_Cone.
Input: a set of nonnegative functions and a tolerance. For each function in turn, iteratively subtract admissible multiples of the other basis functions, keeping the result nonnegative while decreasing its norm. Output: the modified set of functions.
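The original listing of Algorithm 1 is only partially recoverable here, so the following Python sketch implements one plausible reading of the routine described in the text. The discretization of functions as rows of an array, the sweep order, and the choice of the subtraction coefficient as the norm-minimizing value clipped to preserve nonnegativity are our assumptions, not the paper's exact pseudocode:

```python
import numpy as np

def enlarge_cone(Phi, tol=1e-8):
    """Sketch of an Enlarge_Cone-type routine: each nonnegative basis
    function phi_i is modified by subtracting multiples of the other
    functions, keeping phi_i - c * phi_j >= 0 while decreasing its norm.
    Phi: (n, Q) array whose rows are nonnegative sampled functions."""
    Phi = np.array(Phi, float)
    n = Phi.shape[0]
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            phi_i, phi_j = Phi[i], Phi[j]
            mask = phi_j > tol
            if not mask.any():
                continue
            c_max = np.min(phi_i[mask] / phi_j[mask])   # keeps nonnegativity
            c_opt = (phi_i @ phi_j) / (phi_j @ phi_j)   # unconstrained minimizer
            c = min(c_max, max(c_opt, 0.0))
            Phi[i] = phi_i - c * phi_j                   # norm-nonincreasing
    return Phi

Phi_out = enlarge_cone(np.array([[2.0, 1.0, 1.0, 2.0],
                                 [0.0, 1.0, 2.0, 3.0],
                                 [3.0, 2.0, 1.0, 0.0]]))
```

Since the subtraction coefficient never exceeds the unconstrained least-squares minimizer, each update can only decrease the norm of the modified row, and the clipping to c_max keeps every entry nonnegative.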
We remark that, if we work with positive functions that are bounded above by a constant L, we can ensure that the approximations, written as linear combinations of basis functions, also remain between the bounds 0 and L. To do so, we define, on the one hand, as we have just done, a cone of positive functions generated by the above family and, on the other hand, consider the complementary basis obtained by subtracting each greedy element of the reduced basis from the upper bound L, to which we also apply the enlargement of positive functions. We then require the approximation to be written as a positive combination in the first (positive) basis, and require its complement with respect to L to be written as a combination with positive components in the second basis.
In this frame, the approximation appears under the form of a least-squares problem with linear constraints on the n coefficients, expressing the fact that the coefficients are nonnegative in the two above transformed bases.
In addition to the previous basis functions, it is possible to include more general/generic basis functions such as polynomial, radial, or wavelet functions in order to guarantee simple forecasting trends. For instance, one can add affine functions in order to include the possibility of predicting with a simple linear extrapolation in the range of possible forecasts offered by the reduced model. Given the overall exponential behavior of the health data, we have added an exponential function to the reduced basis functions of beta (and of gamma), whose real-valued nonnegative parameters are obtained through a standard exponential regression from the data associated with the targeted profiles of infectious people; that is, the profiles defined in the time interval [0, T] that should be extrapolated to [T, T + tau]. In other words, the final reduced models that we use for forecasting are spanned by this exponential function together with the reduced basis functions. Indeed, including the exponential functions in the reduced models gives easy access to the overall behavior of the parameters beta and gamma; the rest of the basis functions generated from the training sets catch the higher-order corrections and then improve the extrapolation.
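The standard exponential regression mentioned above can be sketched as a linear fit on the logarithm of the data; the profile values below are hypothetical:

```python
import numpy as np

def exp_regression(y, t):
    """Fit y ~ a * exp(b * t) via linear regression on log(y);
    assumes y > 0 on the fitting window."""
    b, log_a = np.polyfit(t, np.log(y), 1)
    return np.exp(log_a), b

t_fit = np.arange(30.0)                       # fitting window [0, T]
y = 2.0 * np.exp(0.07 * t_fit)                # e.g., a fitted coefficient profile
a, b = exp_regression(y, t_fit)
t_all = np.arange(45.0)                       # full range [0, T + tau]
exp_mode = a * np.exp(b * t_all)              # extra basis function
```

The fitted mode is defined over the full range [0, T + tau], so appending it to the reduced basis directly encodes the dominant exponential trend in the extrapolation.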
Remark 1. Reduced models on the SIR solutions. Instead of applying model reduction to the sets of coefficients beta and gamma, as we do in our approach, we could apply the above techniques directly to the sets of solutions of the SIR models with time-dependent coefficients. In this case, however, the resulting approximation would not follow SIR dynamics.
3. Methodology for Multiple Regions Including Population Mobility Data
The forecasting method of Section 2.2 for a single region can be extended to the treatment of multiple regions involving population mobility. The prediction scheme is based on a multi-regional SIR model with time-dependent coefficients. Compared to other, more detailed models, the main advantage of our approach is that it drastically reduces the number of parameters to be estimated. Indeed, detailed multi-regional models, such as multi-regional extensions of the above SEI5CHRD and SE2IUR models from Section 2.1, require a number of parameters that quickly grows with the number P of regions involved. Their calibration thus requires large amounts of data which, in addition, may be unknown, very uncertain, or not available. In a forthcoming paper, we will apply the fully multi-regional setting to the post-lockdown period.
The structure of this section is the same as above for the case of a single region. In Section 3.1, we begin by introducing the multi-regional SIR model with time-dependent coefficients and associated detailed models. As with any multi-regional model, mobility data are required as input, and the nature and level of detail of the available data motivate certain choices regarding the modeling of the multi-regional SIR (as well as of the other detailed models). We then present in Section 3.2 the general pipeline, in which we emphasize the high modularity of the approach.
3.1. Multi-Regional Compartmental Models
In the spirit of fluid flow modeling, there are essentially two ways of describing mobility between regions:
In a Eulerian description, we take the regions as fixed references for which we record incoming and outgoing travels;
In a Lagrangian description, we follow the motion of people living in a certain region and record their travels in the territory. We can expect this modeling to be more informative regarding the geographical spread of the disease, but it comes at the cost of additional details regarding the home region of the population.
Note that both descriptions hold at any coarse or fine geographical level, in the sense that what we call the regions could be taken to be full countries, departments within a country, or very small geographical partitions of a territory. We next describe the multi-regional SIR models with the Eulerian and Lagrangian descriptions of population fluxes, which form the output of our methodology.
3.1.1. Multi-Regional SIR Models with Time-Dependent Parameters
Eulerian description of population flux: Assume that we have P regions and that the number of people in region i at time t is N_i(t) for i = 1, …, P. Due to mobility, the population in each region varies, so N_i depends on t. However, the total population is assumed to be constant and equal to N; that is,

N = ∑_{i=1}^{P} N_i(t), for all t ≥ 0.

For any i ≠ j, let λ_{ij}(t) be the probability that people from i travel to j at time t. In other words, λ_{ij}(t) N_i(t) δt is the number of people from region i that have travelled to region j between time t and t + δt. Note that we have λ_{ij}(t) ≥ 0. Since, for any i ∈ {1, …, P},

N_i(t + δt) = N_i(t) + δt ∑_{j ≠ i} ( λ_{ji}(t) N_j(t) − λ_{ij}(t) N_i(t) ),

dividing by δt and taking the limit δt → 0 yields

(dN_i/dt)(t) = ∑_{j ≠ i} ( λ_{ji}(t) N_j(t) − λ_{ij}(t) N_i(t) ).

Thus, ∑_{i=1}^{P} dN_i/dt = 0, which is consistent with our assumption that the total population is constant.
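As a sanity check, the population-exchange equation above can be integrated numerically and total population conservation verified. The following sketch uses an explicit Euler scheme; the number of regions, the mobility values and the step size are arbitrary illustrative choices, not data from the paper.

```python
import numpy as np

def mobility_rhs(N, lam):
    """Right-hand side dN_i/dt = sum_{j != i} (lam[j,i]*N[j] - lam[i,j]*N[i]).

    `lam[i, j]` is the travel probability from region i to region j; the
    diagonal cancels between inflow and outflow, so it can be left at zero.
    """
    inflow = lam.T @ N               # sum_j lam[j, i] * N[j]
    outflow = lam.sum(axis=1) * N    # N[i] * sum_j lam[i, j]
    return inflow - outflow

def integrate(N0, lam, dt=0.01, steps=1000):
    """Explicit Euler integration of the population-exchange model."""
    N = np.array(N0, dtype=float)
    for _ in range(steps):
        N = N + dt * mobility_rhs(N, lam)
    return N
```

Since the inflow and outflow terms cancel when summed over all regions, the Euler scheme conserves the total population exactly, up to floating-point rounding.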
The time evolution of the N_i is known in this case if we are given the λ_{ij} from Eulerian mobility data. In addition to these mobility data, we also have data on the evolution of infected and removed people, and our goal is to fit a multi-regional SIR model that is in accordance with these data. Thus, we propose the following model.
Denoting S_i(t), I_i(t) and R_i(t) as the number of susceptible, infectious and removed people in region i at time t, we first have the relation

S_i(t) + I_i(t) + R_i(t) = N_i(t), for all t ≥ 0 and i ∈ {1, …, P}. (9)

Note that, differentiating this relation, it follows that dS_i/dt + dI_i/dt + dR_i/dt = dN_i/dt.
To model the evolution between compartments, one possibility is the following SIR model:

dS_i/dt = −β_i S_i I_i / N_i + ∑_{j ≠ i} ( λ_{ji} S_j − λ_{ij} S_i ),
dI_i/dt = β_i S_i I_i / N_i − γ_i I_i + ∑_{j ≠ i} ( λ_{ji} I_j − λ_{ij} I_i ), (11)
dR_i/dt = γ_i I_i.

The parameters β_i, γ_i, i = 1, …, P, depend on t, but we have omitted this dependence for ease of reading. Introducing the compartmental densities

s_i = S_i / N_i, i_i = I_i / N_i, r_i = R_i / N_i,

the system can equivalently be rewritten in terms of (s_i, i_i, r_i).
Before going further, some points should be highlighted:
- The model is consistent in the sense that it satisfies (9), and when λ_{ij} = 0 for all i ≠ j, we recover the traditional SIR model;
- Under lockdown measures, λ_{ij} ≈ 0 and the population remains practically constant. As a result, the evolution of each region is decoupled from the others, and each region can be addressed with a mono-regional approach;
- The use of β_i in Equation (11) is debatable. When people from region j arrive in region i, it may be reasonable to assume that their contact rate is β_j;
- The use of λ_{ji} in Equation (11) is also very debatable. The probability λ_{ji} was originally defined to account for the mobility of people from region j to region i without specifying the compartment. However, in Equation (11), we need the probability of mobility of infectious people from region j to region i, which we denote by λ^I_{ji} in the following. It seems reasonable to think that λ^I_{ji} may be smaller than λ_{ji}, because as soon as people become symptomatic and suspect an illness, they will probably not move. Two possible options would be as follows:
  - We could try to estimate λ^I_{ji}. If symptoms arise, for example, 2 days after infection and if people recover in 15 days on average, then we could say that λ^I_{ji} ≈ (2/15) λ_{ji}.
  - As the above seems to be quite empirical, another option would be to use λ^I_{ji} = λ_{ji} and absorb the uncertainty in the values of the β_i that can be fitted.
- We choose not to add mobility in the R compartment, as this does not modify the dynamics of the epidemic spread; only adjustments in the population sizes are needed.
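The remarks above can be made concrete with a minimal numerical sketch of the right-hand side of system (11). In particular, setting all mobility probabilities to zero must reproduce P independent classical SIR models. The contact rates and population sizes below are hypothetical, and the same mobility matrix is used for S and I (the option λ^I = λ discussed above).

```python
import numpy as np

def multiregional_sir_rhs(S, I, R, beta, gamma, lam):
    """RHS of the Eulerian multi-regional SIR model: infection and recovery
    within each region, plus mobility exchange in the S and I compartments
    (no mobility in R, as discussed in the text).  All arguments are arrays
    of length P; lam[i, j] is the travel probability from region i to j."""
    N = S + I + R
    infect = beta * S * I / N

    def exchange(X):
        # incoming travellers minus outgoing travellers for each region
        return lam.T @ X - lam.sum(axis=1) * X

    dS = -infect + exchange(S)
    dI = infect - gamma * I + exchange(I)
    dR = gamma * I
    return dS, dI, dR
```

With `lam = 0` the exchange terms vanish and each region evolves as an independent SIR model, which is the decoupling used for the lockdown period.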
Lagrangian description of population flux: We call the above description Eulerian because the regions serve as a fixed frame of reference. Another possibility is to follow the trajectories of the inhabitants of each region, in the same spirit as following the trajectories of fluid particles.
Let S_i(t), I_i(t), and R_i(t) now be the number of susceptible, infectious and removed people who are resident in region i ∈ {1, …, P}. It is reasonable to assume that the total number of residents N_i = S_i(t) + I_i(t) + R_i(t) is constant in time. However, not all the residents of region i may be in that region at time t. Let λ^{S,i}_{jk}(t) be the probability that susceptible people resident in i travel from region j to region k at time t. With this notation, λ^{S,i}_{ii}(t) is the probability that susceptible people resident in i remain in region i at time t. Similarly, let λ^{I,i}_{jk}(t) be the probability that infectious people resident in i travel from region j to k at time t. Thus, the total number of susceptible and infectious people that are in region i at time t is obtained by summing, over all home regions j, the residents of j that are located in region i at time t. We can thus write the evolution of (S_i, I_i, R_i) by combining the infection and recovery dynamics in each visited region with these presence counts; this yields the Lagrangian multi-regional SIR model (12). Note that S_i + I_i + R_i is constant, which is consistent with the fact that, in our model, the number of residents N_i of each region does not change over time. We emphasize that, to implement this model, the Lagrangian mobility data λ^{S,i}_{jk} and λ^{I,i}_{jk} are required for all i, j, k ∈ {1, …, P}.
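The bookkeeping underlying the Lagrangian description can be sketched with a simplified presence-probability matrix: `presence[i, j]` below stands for the fraction of residents of region i currently located in region j. This is an illustrative simplification of the full travel probabilities above (which distinguish trips from j to k), not the paper's exact data structure.

```python
import numpy as np

def present_population(X_resident, presence):
    """Presence-weighted counts: entry j of the result is the number of
    people (of one compartment) currently located in region j, obtained by
    summing the residents of every home region i weighted by presence[i, j].
    Each row of `presence` must sum to 1 (every resident is somewhere)."""
    return presence.T @ X_resident
```

Because the rows of the presence matrix sum to one, the total count is preserved: relocating people does not create or destroy them, which mirrors the constancy of the resident populations N_i in the Lagrangian model.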
Notation: In the following, we gather the compartmental variables in vectors S = (S_1, …, S_P), I = (I_1, …, I_P) and R = (R_1, …, R_P), as well as the time-dependent coefficients β = (β_1, …, β_P) and γ = (γ_1, …, γ_P). For any β and γ, we denote by (S, I, R)(β, γ) the output of any of the above multi-regional SIR models. For simplicity, in what follows, we omit the initial condition in the notation.
3.1.2. Detailed Multi-Regional Models with Constant Coefficients
In the spirit of the multi-regional SIR model, one can formulate multi-regional versions of the more detailed models introduced in Section 2.1. We omit the details for the sake of brevity.
3.2. Forecasting for Multiple Regions with Population Mobility
Similar to the mono-regional case, we assume that we are given health data in all regions. The observed data in region i are the series of infected people, denoted I_i^{obs}, and of removed people, denoted R_i^{obs}. They are usually given at a national or regional scale and on a daily basis.
We propose to fit the data and provide forecasts with SIR models with time-dependent parameters β_i(t) and γ_i(t) for each region i. As in the mono-regional case, we can prove that such a simple family possesses optimal fitting properties for our purposes. In the current case, the cost function is the natural extension of the mono-regional one, summing the misfit between the model outputs and the observations (I_i^{obs}, R_i^{obs}) over all regions i = 1, …, P, and the fitting problem is the optimal control problem of finding the time-dependent parameters (β, γ) that minimize this cost; we refer to this minimization as problem (13).
The following proposition ensures the existence of a unique minimizer under certain conditions. To prove it, it is useful to remark that any of the above multi-regional SIR models (see (11) and (12)) can be written in a general form that is linear in the parameter vectors (β, γ), where:
- by a slight change of notation, the vectors S, I and R are the densities of population in the case of the Eulerian approach (see Equation (11)), and the classical population numbers in the case of the Lagrangian approach (see Equation (12));
- D(I) is the P × P diagonal matrix with diagonal entries given by the vector I;
- M is a matrix that depends on the vectors of susceptible and infectious people S, I and on the mobility data. In the case of the Eulerian description, the mobility data form the P × P matrix (λ_{ij}), and in the case of the Lagrangian approach, they form the P × P × P tensor (λ^{i}_{jk}). For example, in the case of the Eulerian model (11), the matrix M is the one appearing in Equation (14).
Proposition 2. If M(t) is invertible for all t, then there exists a unique minimizer to problem (13).
Proof. Since we assume that M(t) is invertible for every t, we can invert the general form above and define (β, γ)(t) from the observed trajectories and their time derivatives. With this choice, the model output reproduces the observations exactly, so the cost vanishes, which implies that (β, γ) is the unique minimizer of problem (13). □
Before continuing, let us comment on the invertibility of M, which is necessary in Proposition 2. A sufficient condition is that the matrix be strictly diagonally dominant, either row-wise or column-wise. This yields certain conditions on the mobility data with respect to the values of the s_i and i_i, i = 1, …, P. For example, if M is defined as in Equation (14), diagonal dominance in each row holds provided the outgoing mobility of each region is small enough relative to the local terms, and a symmetric condition on the incoming mobility guarantees diagonal dominance in each column, and hence invertibility. Note that either condition is satisfied in situations with little or no mobility, where λ_{ij} ≈ 0 for i ≠ j.
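The sufficient condition can also be checked numerically: a strictly diagonally dominant matrix, row-wise or column-wise, is invertible (Levy–Desplanques theorem). The helper below is generic and not tied to any specific form of M.

```python
import numpy as np

def is_diagonally_dominant(M, axis="row"):
    """Return True if M is strictly diagonally dominant along rows
    (axis='row') or columns (axis='column')."""
    A = np.abs(np.asarray(M, dtype=float))
    diag = np.diag(A)
    if axis == "row":
        off = A.sum(axis=1) - diag   # off-diagonal row sums
    else:
        off = A.sum(axis=0) - diag   # off-diagonal column sums
    return bool(np.all(diag > off))
```

In the little-mobility regime, the off-diagonal entries of M are small, so either test passes and the parameter-recovery step of Proposition 2 is well defined.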
Now that we have precisely defined the set-up for the multi-regional case, we can follow the same steps as in Section 2.2 to derive forecasts involving model reduction for the time-dependent variables β(t) and γ(t).
4. Numerical Results
In this section, we apply our forecasting method to the COVID-19 pandemic, which spread in France starting approximately in February 2020. Particular emphasis is placed on the first pandemic wave, for which we consider the period from 19 March to 20 May 2020. Due to the lockdown imposed between 17 March and 11 May, inter-regional population mobility was drastically reduced in that period; studies using anonymized Facebook data have estimated the reduction to be 80% (see [28]). As a result, it is reasonable to treat each region independently from the rest, and we apply the mono-regional setting of Section 2. Here, we focus on the case of the Paris region, and we report the different forecasting errors obtained using the methods described in Section 2. Some forecasts are also shown for the second wave in the Paris region between 24 September and 25 November.
The numerical results are presented as follows. Section 4.1 explains the sources of health data. Section 4.2.1 and Section 4.2.2 explore the results in detail and present a study of the forecasting power of the methods over a two-week horizon. Section 4.2.3 displays forecasts for the second wave. Section 4.2.4 illustrates the robustness of the forecasting over longer periods of time. A discussion of the fitting errors of the methods is given in Appendix A, and additional results highlighting the accuracy of the forecasts are shown in Appendix B.
4.1. Data
We use public data from Santé Publique France (https://www.data.gouv.fr/en/datasets/donnees-hospitalieres-relatives-a-lepidemie-de-covid-19/) to obtain the numbers of infected and of removed people. As shown in Figure 3, the raw data present oscillations at the scale of a week, which are due to administrative delays in the official reporting of cases by hospitals. For our methodology, we have smoothed the data by applying a 7 day moving average filter. In order to account for the total number of infected people, we also multiply the data by an adjustment factor taken from the literature (indeed, it is said in [29] that "of the 634 confirmed cases, a total of 306 and 328 were reported to be symptomatic and asymptomatic", and in [10], it is claimed that the probability of developing severe symptoms for a symptomatic patient is 0 for children, 0.1 for adults and 0.2 for seniors; thus, taking approximate values of these probabilities, one may estimate the adjustment factor as the reciprocal of the fraction of infections that lead to a hospital report). Obviously, this factor is uncertain and could be improved in the light of further retrospective studies of the outbreak. However, note that when S ≈ N, which is the case at the start of the epidemic, the impact of this factor on the dynamics is negligible, as can be understood from (3). In addition, since we use the same factor to provide a forecast of hospitalized people, the influence of this choice is minor.
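The 7 day moving average used to remove the weekly reporting oscillations can be implemented as a simple convolution. The synthetic series in the usage check below is illustrative, not actual hospital data.

```python
import numpy as np

def weekly_smooth(series, window=7):
    """Centered moving average of a daily series.  Edges use a shrinking
    window (convolution in 'same' mode with renormalization by the actual
    number of contributing days), so the output length equals the input."""
    series = np.asarray(series, dtype=float)
    kernel = np.ones(window)
    sums = np.convolve(series, kernel, mode="same")
    counts = np.convolve(np.ones_like(series), kernel, mode="same")
    return sums / counts
```

By construction, a purely weekly oscillation averages to zero over any 7 consecutive days, so in the interior of the series the filter removes it exactly while leaving the underlying level untouched.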
4.2. Results
Using the observations I^{obs}(t) and R^{obs}(t), we apply a finite difference scheme to Formula (3) to derive β(t) and γ(t) over the observation window. Figure 4 shows the values of these parameters, as well as the basic reproduction number, for the first pandemic wave in Paris.
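The finite-difference recovery of β(t) and γ(t) can be sketched for a single region: from the SIR equations dR/dt = γI and dI/dt = βSI/N − γI, one gets γ = (dR/dt)/I and β = N(dI/dt + dR/dt)/(SI). The paper's Formula (3) may differ in detail; this is a standard reconstruction under those SIR assumptions, exercised on synthetic data.

```python
import numpy as np

def simulate_sir(beta, gamma, S0, I0, R0, dt, steps):
    """Forward-Euler simulation of a classical SIR model with constant
    beta and gamma; returns the S, I, R trajectories as arrays."""
    S, I, R = [S0], [I0], [R0]
    N = S0 + I0 + R0
    for _ in range(steps):
        new_inf = beta * S[-1] * I[-1] / N
        new_rec = gamma * I[-1]
        S.append(S[-1] - dt * new_inf)
        I.append(I[-1] + dt * (new_inf - new_rec))
        R.append(R[-1] + dt * new_rec)
    return tuple(np.array(x) for x in (S, I, R))

def recover_parameters(S, I, R, dt):
    """Finite-difference estimates: gamma = R'/I and
    beta = N (I' + R') / (S I), evaluated with forward differences."""
    N = S[0] + I[0] + R[0]
    dI = np.diff(I) / dt
    dR = np.diff(R) / dt
    gamma = dR / I[:-1]
    beta = N * (dI + dR) / (S[:-1] * I[:-1])
    return beta, gamma
```

Because forward differences exactly invert the forward-Euler step, the constant parameters are recovered to machine precision on this synthetic example; on real, noisy data the same formulas produce the oscillating time-dependent estimates discussed in the text.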
We next follow the steps presented in Section 2.2 to obtain the forecasts. In the learning phase (step 1), we use two parametric detailed models of SE2IUR and SEI5CHRD types to generate training sets of functions for β and γ, where the underlying model parameters are uniformly sampled in the vicinity of the parameter values suggested in the literature [9,10]. Based on these training sets, we finish step 1 by building three types of reduced models: SVD, NMF and ENG (see Section 2.4).
Given the reduced bases, we next search for the optimal approximations of β and γ that best fit the observations (step 2 of our procedure). For this fitting step, we consider two loss functions:
- routine-IR: the loss function from (1);
- routine-βγ: the loss function from (6).
We study the performance of each of the three reduced models and the impact of the dimension n of the reduced model on the fitting error. These results are presented in Appendix A in order not to overload the main discussion. The main conclusion is that the fitting strategy using SVD-reduced bases provides smaller errors than NMF and ENG, especially when we increase the number of modes n. This is illustrated in Figure 5, where we show the fittings obtained with routine-βγ for the first wave. We observe that SVD is the best at fitting β and γ, while ENG produces a smoother fitting of the data. Although the smoother fitting with ENG yields larger fitting errors than SVD, we see in the next section that it yields better forecasts.
4.2.1. Forecasting for the First Pandemic Wave with a 14 Day Horizon
In this section, we illustrate the short-term forecasting behavior of our method. We consider a forecasting window of 14 days, and we examine several different starting days in the course of the first pandemic wave. The results are shown in Figure 6, Figure 7, Figure 8, Figure 9, Figure 10, Figure 11, Figure 12, Figure 13 and Figure 14. Recall that the forecasting uses the coefficients of the reduced bases obtained by the fitting procedure, but also the optimal initial condition of the forecast, which minimizes the error on the three days prior to the start of the forecast. For each given fitting strategy (routine-IR, routine-βγ) and each given type of reduced model (SVD, NMF, ENG), we have chosen to plot an average forecast computed from the predictions obtained with the different reduced dimensions. This choice is a simple type of forecast combination, but of course other more elaborate aggregation options could be considered. The labels of the plots distinguish, for each reduced model, the average forecasts obtained using routine-βγ from those obtained using routine-IR.
The main observation from Figure 6, Figure 7, Figure 8, Figure 9, Figure 10, Figure 11, Figure 12, Figure 13 and Figure 14 is that the ENG-reduced model is the most robust and accurate forecasting method. Fitting ENG with routine-IR or routine-βγ does not seem to lead to large differences in the quality of the forecasts, but routine-βγ seems to provide slightly better results. This claim is further confirmed by the study of the numerical forecasting errors of the different methods shown in Appendix B. These figures also show that the SVD-reduced model is very unstable and provides forecasts that blow up. This behavior illustrates the dangers of overfitting, namely that a method with high fitting power may present poor forecasting power due to instabilities. In the case of SVD, the instabilities stem from the fact that approximations are allowed to take negative values. This is the reason why NMF, which incorporates the nonnegativity constraint, performs better than SVD. One of the reasons why ENG outperforms NMF is the enlargement of the cone of nonnegative functions (see Section 2.4). It is important to note that, with ENG, the reduced bases are directly related to well-chosen virtual scenarios, whereas SVD and NMF rely on matrix factorization techniques that provide purely artificial bases. This makes forecasts from ENG more realistic and therefore more reliable.
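The negativity issue with SVD-based reduction is easy to reproduce: a truncated SVD approximation of a family of nonnegative curves is optimal in the least-squares sense, but nothing constrains it to remain nonnegative, whereas an expansion with nonnegative modes and nonnegative coefficients is nonnegative by construction. The snapshot family below (shifted Gaussian bumps) is synthetic and only illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 200)
# Synthetic nonnegative "epidemic-like" snapshots: randomly shifted bumps.
centers = rng.uniform(0.2, 0.8, 30)
snapshots = np.stack([np.exp(-80.0 * (t - c) ** 2) for c in centers])

# Rank-n truncated SVD approximation of the snapshot matrix.
U, s, Vt = np.linalg.svd(snapshots, full_matrices=False)
n = 2
approx = U[:, :n] @ np.diag(s[:n]) @ Vt[:n, :]

min_value = approx.min()  # dips below zero despite nonnegative data
```

The second singular mode is orthogonal to the (one-signed) leading mode and therefore must change sign, which is what drives the reconstructions negative; this is precisely the behavior that motivates the nonnegativity constraints of NMF and ENG in the text.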
4.2.2. Focus on the Forecasting with ENG
For our best forecasting method (routine-βγ using ENG), we plot in Figure 15, Figure 16, Figure 17, Figure 18, Figure 19, Figure 20, Figure 21, Figure 22 and Figure 23 the forecasts for each dimension n = 1 to 10. The plots give the forecasts on a 14 day-ahead window for β, γ, and the resulting evolution of the infected I and removed R. We see that the method performs reasonably well for all values of n, showing that the averaged forecasts of the previous section are not compensating for spurious effects which could occur for certain values of n. We have chosen to display the inaccurate forecasts from 3 April, 7 April and 11 April, as they are among the worst predictions obtained using this method; however, it is important to mention that, despite the lack of accuracy in these cases, the forecasts remain plausible epidemic behaviors, with different but realistic evolutions for β and γ compared to the actual evolution. Note that the method was able to predict the peak of the epidemic several days in advance (see Figure 15). We also observe that the prediction of γ is difficult at all times due to its oscillatory behavior. Despite this difficulty, the resulting forecasts for I and R are in general very satisfactory.
4.2.3. Forecasting of the Second Wave with ENG
The review of this paper took place during the month of November 2020, as the second COVID-19 pandemic wave hit France. We took advantage of this to enlarge the body of numerical results, and we provide some example forecasts with ENG for this wave in Figure 24, Figure 25 and Figure 26. As the figures illustrate, the method provides very satisfactory forecasts on a 14 day-ahead window. We again observe a satisfactory prediction of the second peak (Figure 24, Figure 25 and Figure 26) and the same difficulty in forecasting γ due to its oscillations, but this has not greatly impacted the quality of the forecasts for I and R.
4.2.4. Forecasts with ENG with a 28 Day-Ahead Window
To conclude this section, we extend the forecasting window from 14 to 28 days and study whether the introduced ENG method still provides satisfactory forecasts. As shown in Figure 27, Figure 28, Figure 29, Figure 30, Figure 31 and Figure 32, the results of the method are quite stable for these larger windows. This shows that, in contrast to standard extrapolation methods using classical linear or affine regressions, the reduced basis captures the dynamics of β and γ not only locally but also over extended time intervals.
5. Conclusions
We have developed an epidemiological forecasting method based on reduced modeling techniques. Of the different strategies that have been explored, the one that outperforms the rest in terms of robustness and forecasting power involves reduced models built with an Enlarged Nonnegative Greedy (ENG) strategy. This method is novel and of interest in itself, as it allows one to build reduced models that preserve positivity and even other types of bounds. Despite the fact that ENG does not have optimal fitting (i.e., interpolation) properties, it is well suited for forecasting since, thanks to the preservation of the natural constraints of the coefficients, it generates realistic dynamics with few modes. The results have been presented for a mono-regional test case, and we plan to extend the present methodology to a multi-regional setting using mobility data.
Last but not least, we would like to emphasize that the developed strategy is very general and could be applied to the forecasting of other types of dynamics. The specificity of each problem may, however, require adjustments in the reduced models. This is exemplified in the present problem through the construction of reduced models that preserve positivity.