1. Introduction
Even if it remains an open question whether the accurate, reliable prediction of individual earthquakes is a realistic scientific goal, the possibility of forecasting future earthquakes exists. The two major examples are the estimation of the occurrence probability of large shocks over a very long temporal interval (decades up to centuries) and the estimation of the aftershock occurrence rate after a large earthquake. Neither of the two cases is relevant in predicting the occurrence of an impending large earthquake, but both provide very useful information for mitigating the impact of earthquakes that are likely to occur. The first example, usually defined as long-term (LT) seismic forecasting, is probably the most relevant from an engineering point of view, for instance in urban planning and building construction: It allows one to address questions such as the maximum magnitude expected in a given area over the next years. Concerning the second example, usually defined as post-seismic Short-Term Aftershock (STA) forecasting, many events (the aftershocks) are always observed soon after the occurrence of a strong shock (the main shock). Aftershocks can attain sizes comparable to their triggering main shock and can be very dangerous since they impact buildings already damaged by the previous shocks.
This review is focused on STA forecasting, which can potentially be very efficient. Indeed, the organization in time, space and energy of aftershocks follows well established empirical laws, such as the Gutenberg–Richter (GR) and the Omori–Utsu (OU) laws [1,2], which can be implemented in forecasting models. The GR law states that the magnitude distribution of earthquakes is an exponential function of the magnitude, and the OU law characterizes the power-law decay of the aftershock rate as a function of the time t since the main shock.
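As a minimal numerical illustration of the two laws just described, the following Python sketch draws magnitudes from a GR distribution and evaluates an OU rate. All parameter values (b = 1, and the K, c, p of the OU law) are illustrative choices, not values fitted to any catalog.

```python
import math
import random

def sample_gr_magnitude(b=1.0, m_min=2.0):
    """Draw one magnitude from the Gutenberg-Richter law:
    P(m >= x) = 10**(-b*(x - m_min)), i.e. an exponential distribution."""
    u = random.random()
    return m_min - math.log10(1.0 - u) / b

def omori_utsu_rate(t, K=100.0, c=0.01, p=1.1):
    """Omori-Utsu aftershock rate (events per day) at time t (days)
    after the main shock: n(t) = K / (t + c)**p."""
    return K / (t + c) ** p

random.seed(0)
mags = [sample_gr_magnitude() for _ in range(100_000)]
# With b = 1 there are ten times fewer events per unit magnitude:
n_ge_3 = sum(m >= 3.0 for m in mags)
n_ge_4 = sum(m >= 4.0 for m in mags)
```

The ratio `n_ge_3 / n_ge_4` should be close to 10, the hallmark of a GR law with b = 1.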
Even if LT and STA forecasting act on two very different time scales, the two problems are intimately related. In the simplest description, seismic occurrence can be viewed as the superposition of two different stochastic processes: background seismicity responsible for main shocks, which are the target of LT forecasting, and aftershock occurrence, which is the target of STA forecasting. Hence, to achieve an accurate LT forecasting method, a so-called declustering procedure is necessary, which allows one to separate the two processes by means of a detailed knowledge of aftershock features. A clear example is the Epidemic Type Aftershock Sequence (ETAS) model, introduced by Ogata [3], which nowadays probably represents the most popular model for STA forecasting as well as one of the most efficient tools for LT forecasting. Studies of STA forecasting models, such as the ETAS model or simpler models implementing the OU law, have shown [4,5,6,7,8,9,10,11,12,13,14] that the incompleteness of datasets strongly affects the estimation of model parameters. This effect is most relevant in the first part of aftershock sequences, when many earthquakes, in particular small ones, are not recorded and therefore not reported in seismic catalogs. This is mainly caused by the overlap of the signals of individual earthquakes in the seismic records. At the same time, incompleteness is also produced by the overload of processing facilities, due to a very large number of events in a narrow temporal window, and by the damage caused by the main shock to the seismic stations. Because of these difficulties, in many cases, operational probability forecasts only start more than 24 h after the main shock [15].
In this review, we explore the problem of incompleteness of instrumental datasets, focusing in particular on the so-called Short-Term Aftershock Incompleteness (STAI). This is the main subject of Section 2. In Section 3, we review recent results on the influence of STAI on the estimation of parameters of STA forecasting models. Section 4 is then devoted to showing that STAI is an intrinsic property of seismic catalogs that is not related to the efficiency of the seismic network. We show, conversely, that the main mechanism responsible for STAI is the overlap of aftershock coda waves with the waveforms of other events, which obscures small aftershocks that occur close in time after larger ones. In Section 5, we present some approaches recently proposed to take this “obscuration” effect explicitly into account within the ETAS model. These approaches, however, are not simple to implement in real-time automatic procedures for aftershock forecasting. This is the topic of Section 6, which presents two different procedures developed to provide accurate STA forecasting within several minutes after the occurrence of a main shock: the Omi et al. method [7,9,10] and the Lippiello et al. method [16,17]. The tests of these two methods in retrospective studies are presented in Section 6, and final conclusions are drawn in the last section.
2. Catalog Incompleteness
Catalog completeness is usually quantified in terms of a magnitude threshold (or lower cut-off) Mc, defined as the magnitude above which all events are identified and included in the catalog. An accurate estimate of Mc is fundamental in seismic forecasting. A value that is too high discards usable data, leading to a loss of information by under-sampling. Conversely, a value that is too low leads to an unreliable estimation of parameter values, and thus to a biased analysis, because of the incomplete dataset. A standard way of estimating Mc is to find the minimum magnitude above which the best fit with the GR law is obtained. The value of Mc clearly depends on the ability to filter noise and on the distance between the earthquake epicenter and the seismic stations necessary to trigger an event declaration in a catalog. Instrumental data from Taiwan seismicity, for example, give [18] a value of Mc at a given location (Equation (1)) that is controlled by the distance in kilometers between the epicenter and the position of the third nearest seismic station. In Figure 1, we present the Mc map for Southern California obtained in [19] via the method of Amorése [20]. In particular, we observe a region in the central part of Southern California with a higher density of seismic stations, characterized by a lower completeness magnitude. This region, defined as Region 1, contains a large fraction of the events recorded in the entire catalog. The remaining Southern California region (defined as Region 2) has a higher completeness magnitude, which becomes largest near the borders. A similar behavior is found if Mc is evaluated according to the method of Schorlemmer and Woessner [21].
We stress that Mc estimated from Equation (1) is a static quantity, controlled by the number of seismic stations, and we define it as “the static completeness magnitude”. On the other hand, instrumental data show that the Mc value, inside a given region, changes with time, reaching much larger values in the first part of the aftershock sequence. As already anticipated in the Introduction, the dependence of the completeness magnitude Mc on the time t since the main shock occurrence is usually termed Short-Term Aftershock Incompleteness (STAI). Results in [22,23,24] give a completeness magnitude Mc(t) which depends logarithmically on the time t since the main shock (Equation (2)), where the offset involves the main shock magnitude, and d and the remaining coefficients are fitting parameters. We refer to Equation (2) as the Kagan–Helmstetter formula, with its best fitting parameters obtained when time is measured in days. In Figure 2, we plot the experimental aftershock magnitude distribution evaluated for different temporal intervals after the Landers earthquake in Southern California. Experimental results show a magnitude distribution that is approximately flat for magnitudes below a crossover value, whereas curves appear parallel on a semi-logarithmic scale above it, consistent with a GR law. The crossover magnitude is in agreement with Equation (2).
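A hedged sketch of a Kagan–Helmstetter-type time-dependent completeness magnitude follows. The functional form Mc(t) = Mm − G − H·log10(t), with time t in days, is assumed here with the commonly quoted values G = 4.5 and H = 0.75; these numbers are illustrative stand-ins, not the fit reported in the works cited above.

```python
import math

def kagan_helmstetter_mc(t_days, m_main, G=4.5, H=0.75, mc_static=2.0):
    """Time-dependent completeness magnitude after a main shock of
    magnitude m_main, Kagan-Helmstetter form: Mc(t) = Mm - G - H*log10(t).
    G, H and mc_static are illustrative values only.  The result is
    floored at the static completeness level mc_static."""
    mc = m_main - G - H * math.log10(t_days)
    return max(mc, mc_static)
```

For an M7.3 main shock, this gives Mc ≈ 4.3 at t = 0.01 days (about 15 min) and Mc = 2.8 at t = 1 day, recovering the static level at long times.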
In a different approach [5,7,9], STAI is taken into account by considering a magnitude distribution given by the GR law multiplied by a detection rate function, which is represented by an error function (Equation (4)). In this formulation, the function μ(t) represents the 50% detection magnitude and σ represents the width of the range of magnitudes of partially detected earthquakes: At time t, only 50% of the events with magnitude equal to μ(t) are expected to be detected, whereas more than 97% of the events are expected to be detected if their magnitude exceeds μ(t) + 2σ. A reasonable definition therefore corresponds to assuming Mc(t) = μ(t) + 2σ. In particular, Ogata and Katsura [5] proposed that μ(t) obeys the law in Equation (5), where the coefficients are fitting parameters. On the other hand, in a series of papers, Omi et al. [7,8,9,10,15] developed an elegant method to obtain a non-parametric fit of the function μ(t) and an estimate of σ from the occurrence times and magnitudes of all recorded events in a given learning period.
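The error-function detection rate and its product with the GR exponential can be written down directly. The sketch below is a minimal version of this construction; the parameter values used in the comments (b = 1, μ = 3, σ = 0.3) are illustrative.

```python
import math

def detection_rate(m, mu, sigma):
    """Probability that an event of magnitude m is detected, modeled as
    a cumulative normal (error function) centered at mu with width sigma."""
    return 0.5 * (1.0 + math.erf((m - mu) / (math.sqrt(2.0) * sigma)))

def observed_magnitude_density(m, b, mu, sigma):
    """Unnormalized density of *recorded* magnitudes: the GR exponential
    10**(-b*m) multiplied by the detection rate."""
    return 10.0 ** (-b * m) * detection_rate(m, mu, sigma)
```

The observed density is suppressed below μ and recovers the pure GR decay well above μ + 2σ, reproducing the flat-then-exponential shape seen in early-sequence magnitude distributions.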
In Figure 3, we plot the results by Omi et al. [9] for μ(t) and σ for three aftershock sequences in Japan. These results are compared with the Ogata–Hirata formula (Equation (5)) and the Kagan–Helmstetter formula (Equation (2)). Figure 3 shows that the Omi and the Ogata–Hirata models give similar behavior for μ(t) and are able to capture the time variation of the detection rate. In contrast with these two models, since the parameters of the Kagan–Helmstetter formula are fixed for all sequences, it cannot reproduce the diverse recovery dynamics of the completeness magnitude, which depends considerably on each aftershock sequence. The comparison of the forecasting skill of these three methods, for 38 Japanese aftershock sequences, shows that the Omi method performs slightly better than the Ogata–Hirata method and much better than the Kagan–Helmstetter formula [9].
3. The Influence of STAI on Model Parameters
For a complete dataset, one expects that the rate of aftershocks with magnitude larger than a threshold value, occurring at a time t after a main shock of magnitude Mm, can be obtained by combining the GR law and the OU law (Equation (6)). According to the productivity law [25], K depends on the main shock magnitude, and Equation (6) can be rewritten as Equation (7). As already observed in [2], missing small events in the early stage of the aftershock sequence makes the estimate of the parameters in Equation (6) unstable, a problem which becomes particularly relevant at the beginning of aftershock sequences, when the completeness magnitude after large earthquakes can temporarily increase by several units [4,22,26,27]. For this reason, long- and short-term forecasts usually include corrections which take STAI into account [6,28,29].
Incompleteness, in particular, can make the c-value measured from instrumental catalogs much larger than the “true” c-value in the OU law (Equation (6)). Indeed, restricting to aftershocks with magnitudes larger than a reference value, if events with smaller magnitudes are not recorded, the measured c-value can be obtained from Equation (2) after setting the completeness magnitude equal to the reference value, which leads to Equation (8). It is evident that this quantity depends on the parameters of Equation (2) but is not related to the c-value of the OU law. Alternatively, an estimate of the measured c-value can be obtained from Equation (5) by means of the same substitution. As a consequence, the incompleteness at short times hides the true value of c, which in turn introduces a strong bias in the evaluation of the other parameters in Equation (6), strongly affecting routines for short-term aftershock forecasting at short times after the main shock.
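The apparent c-value can be sketched as the time at which the time-dependent completeness magnitude crosses the analysis threshold. The code below assumes a Kagan–Helmstetter form Mc(t) = Mm − G − H·log10(t), with the illustrative values G = 4.5 and H = 0.75 (time in days); setting Mc(t) = m_th and solving for t gives the crossing time.

```python
def apparent_c(m_main, m_th, G=4.5, H=0.75):
    """Time (days) at which the Kagan-Helmstetter completeness magnitude
    Mc(t) = m_main - G - H*log10(t) drops to the analysis threshold m_th.
    Below this time the catalog is incomplete above m_th, so an OU fit
    returns roughly this value instead of the true c.  G and H are
    illustrative values, not fitted ones."""
    return 10.0 ** ((m_main - G - m_th) / H)
```

For example, with these parameters an M7.5 main shock analyzed at threshold m_th = 3 gives an apparent c of about one day, orders of magnitude above typical true c-values.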
3.1. The Influence of STAI on the ETAS Parameters
As anticipated in the Introduction, the ETAS model is nowadays probably the most popular model for STA forecasting. The assumptions of the ETAS model are: (1) the background seismicity is a stationary Poisson process that depends on the position; (2) every event, whether a background event or one triggered by a previous event, triggers its own offspring independently; (3) the expected number of direct offspring is an exponential function of the magnitude of the mother event (productivity law); and (4) the time lags between triggered events and the mother event follow the OU law. According to these assumptions, the occurrence rate of events at a given position and time t is given by Equation (9), where the sum extends over all previous events, each with its own magnitude, epicentral coordinates and occurrence time. The spatial kernel explicitly depends on the triggering magnitude and on the epicentral distance, and a time-independent term accounts for the contribution of background seismicity.
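The four assumptions above can be condensed, for the purely temporal case, into a short conditional-intensity routine. This is a hedged sketch: the spatial kernel is omitted, and all parameter values (mu, K, alpha, c, p, m0) are illustrative, not estimates for any catalog.

```python
def etas_rate(t, history, mu=0.2, K=0.05, alpha=1.0, c=0.01, p=1.1, m0=2.0):
    """Temporal ETAS conditional intensity at time t (days): background
    rate mu plus one Omori-Utsu term per past event, with productivity
    growing exponentially with the triggering magnitude (productivity law).
    history is a list of (t_i, m_i) pairs; parameters are illustrative."""
    rate = mu
    for t_i, m_i in history:
        if t_i < t:
            rate += K * 10.0 ** (alpha * (m_i - m0)) * (t - t_i + c) ** (-p)
    return rate

# an M6 main shock at t = 0 followed by an M4 aftershock half a day later
history = [(0.0, 6.0), (0.5, 4.0)]
```

Before any event, the rate equals the background level mu; just after the M6 event it is dominated by the corresponding Omori term and then decays toward mu.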
The influence of STAI on the estimates of the ETAS parameters was addressed by Zhuang et al. [13] in the case of the 15 April 2016 Kumamoto earthquake sequence in Japan. Under the assumption that earthquake magnitudes are independent of their occurrence times, Zhuang et al. [13] replenished the short-term missing data of small earthquakes by using a bi-scale transformation. They then compared the maximum likelihood estimates of the ETAS parameters for the recorded dataset in the JMA catalog with those for the replenished one, considering only events above a lower magnitude threshold. Results plotted in Figure 4, as a function of the threshold, show that, when the magnitude threshold is at or above the static completeness magnitude of the JMA catalog, the estimated ETAS parameters are about the same for both datasets. Conversely, important differences are found for smaller threshold values. For the replenished dataset, the estimated background rate decreases roughly exponentially when the cut-off magnitude is increased, consistent with what is expected according to the GR law (Figure 4a). The original dataset, conversely, exhibits a flatter behavior, indicating the absence of small-magnitude events. Concerning the other parameters, the most striking feature is that in the replenished dataset all parameters depend only weakly on the threshold, as expected, whereas a non-trivial dependence on the threshold is observed in the JMA catalog.
The results of Zhuang et al. [13] indicate that estimating the ETAS parameters from the original dataset, when one considers a lower magnitude threshold below the completeness level, leads to incorrect results. A similar conclusion was reached by Seif et al. [14], who studied how the ETAS parameters, obtained by the iterative approach of Zhuang et al. [30], depend on the lower magnitude threshold. In particular, Seif et al. [14] investigated two simulated ETAS catalogs: a complete one, which implements the ETAS parameters estimated from the Southern California catalog, and an incomplete one, where aftershocks of large main shocks were removed if their magnitude was smaller than the time-dependent completeness magnitude given in Equation (2). Results plotted in Figure 5 show that the parameter inversion procedure does not recover the true values of the productivity exponent and of p used to generate the synthetic catalogs. Seif et al. [14] attributed the observed discrepancy to the fact that aftershocks triggered by undetected events are erroneously identified as direct aftershocks of some previous larger earthquake. This widens the temporal distribution of direct aftershocks, leading to a smaller p-value. At the same time, because of the anticorrelation with p, the productivity exponent is overestimated. Figure 5, in particular, shows a striking difference between the estimated parameters in the complete and the incomplete catalogs. However, this difference tends to disappear for increasing magnitude thresholds, indicating that the influence of aftershock incompleteness is not significant above the completeness level.
The results of Figure 4 and Figure 5 indicate that using a lower magnitude threshold below the completeness level can lead to incorrect predictions, especially for some parameters. Unfortunately, it is not simple to establish a strict correspondence between the degree of incompleteness of the catalog and the error expected in the estimate of the parameters.
3.2. Is STAI Related to the Static Completeness Magnitude?
As explained in Section 2, the static Mc is a local quantity which depends on the local density of the seismic network, as illustrated by Equation (1) and Figure 1. The influence of the station density on STAI was addressed by de Arcangelis et al. [31] by investigating the c-value in the two sub-regions of Southern California illustrated in Figure 1. As already explained in Section 2, the inner region (Region 1) has a high station density and a low static Mc. Conversely, the station density is small in the external region (Region 2), and the static Mc is larger, with the largest values close to the borders. To obtain an estimate of the c-value in each sub-region, de Arcangelis et al. [31] measured the aftershock daily rate, defined as the number of aftershocks with magnitude larger than a threshold occurring at a temporal distance t after their triggering main shock with magnitude Mm, divided by the number of main shocks with that magnitude. Three different values of the threshold and of Mm were considered. In this study, mainshock–aftershock couples were identified according to the Baiesi–Paczuski (BP) declustering criterion [32,33,34], using the same parameters adopted by Moradpour et al. [35] and Hainzl [12]. In particular, only aftershocks identified as direct descendants of the main shock were included in the analysis.
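The BP criterion can be sketched as follows: each event is linked to the earlier event that minimizes a space-time-magnitude distance of the standard BP form η = Δt · r^df · 10^(−b·m_parent). The code below is a minimal illustration; the values of b, the fractal dimension df, and the small-distance floor are illustrative and not the parameters actually adopted in the studies cited above.

```python
import math

def bp_distance(parent, child, b=1.0, d_f=1.6):
    """Baiesi-Paczuski space-time-magnitude distance between a candidate
    parent and a later child.  Events are (t_days, x_km, y_km, mag)
    tuples; b, d_f and the 0.1 km distance floor are illustrative."""
    dt = child[0] - parent[0]
    if dt <= 0:
        return math.inf  # a parent must occur strictly before its child
    r = math.hypot(child[1] - parent[1], child[2] - parent[2])
    return dt * max(r, 0.1) ** d_f * 10.0 ** (-b * parent[3])

def find_parent(catalog, j):
    """Index of the most likely parent of event j (smallest BP
    distance), or None for the first event of the catalog."""
    if j == 0:
        return None
    return min(range(j), key=lambda i: bp_distance(catalog[i], catalog[j]))

catalog = [
    (0.0, 0.0, 0.0, 7.0),   # main shock
    (0.1, 1.0, 0.0, 4.0),   # nearby and soon after
    (0.2, 2.0, 1.0, 3.0),
]
```

Because the metric weights candidates by 10^(−b·m), both small events are attributed to the M7 main shock rather than to each other.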
The results (Figure 6) show that the aftershock rate clearly depends on the magnitude difference between the main shock and the threshold in both Region 1 and Region 2. In particular, de Arcangelis et al. [31] rescaled time in such a way that data for different values of the threshold and of Mm, inside each sub-region, exhibit a scaling collapse (Figure 7a). It is evident from Figure 7a that the Omori decay sets in when the rescaled time becomes larger than a given value, different between the two regions. Since the c-value can be obtained from the time at which the Omori decay sets in, Figure 7a provides an estimate of c in each region, and one recovers Equation (8) under the appropriate identification of the parameters. In particular, the best fit gives a larger c-value inside Region 1 than inside Region 2. This leads to a counterintuitive behavior, with the c-value being larger inside Region 1 even though the static Mc is significantly smaller inside Region 1 than in Region 2. Conversely, a smaller c-value is found in Region 2, where the static Mc is larger. This result clearly indicates that the c-value is not related to the static Mc and that STAI cannot be reduced by increasing the density of the seismic stations, thus suggesting that STAI originates from a different mechanism (see next section). The same conclusion can also be obtained from the measurement of correlations between magnitudes according to the method proposed in [19,36,37,38]. This analysis [19,31] has shown significantly larger magnitude correlations in Region 1 than in Region 2.
4. The Origin of STAI and the Envelope Function
Results of the previous section (Section 3.2) suggest that STAI is an intrinsic property of seismic catalogs, not related to the density of the seismic stations. This conclusion is strongly supported by the study of the envelope function recorded after several main shocks that occurred in Greece and Italy in the last ten years [16]. More precisely, the envelope function is obtained from the ground velocity recorded during the first days after the main shock. The signal of each component is filtered by means of a two-pass Butterworth filter in a fixed frequency band, the envelope of each signal is computed, and the signals of the three components are superimposed. The envelope function is finally defined as the logarithm of the resulting signal. This quantity was introduced by Peng et al. [26] to identify aftershocks not reported in the JMA catalog during the first minutes after the main shock. The idea is that the occurrence of an aftershock must produce a double peak in the envelope, corresponding to the coupled pair of P and S arrivals. The local magnitude of the event is obtained from the maximum of the envelope plus a constant that depends on the epicentral distance from the recording station, related to the S–P time difference. Considering the evolution of the envelope after a main shock, Lippiello et al. [16] found that the envelope function never goes below a threshold level which is a logarithmically decreasing function of time (Figure 8). As a consequence, even very accurate analyses of post-seismic waveforms, including those which employ sophisticated matched-filter detection algorithms [39,40], do not allow one to identify small events which produce peaks smaller than this threshold. This reflects a completeness magnitude that depends on the time after the main shock, with a functional dependence similar to that of the threshold level; therefore, small events cannot be found, and catalogs are intrinsically incomplete.
To understand the mechanism responsible for the existence of this lower threshold, a closer inspection of the envelope function after all main shocks reveals the existence of two characteristic times: the first is of the order of a few seconds, whereas the second is of the order of a few minutes, and three distinct regimes are observed:
The same three regimes have been found for other main shocks in Southern California and in Italy [16]. The first two regimes can be easily associated with the main shock waveform, which can be modeled in terms of the main shock envelope waveform. Experimental results suggest an initial linear increase of the envelope [41], followed by a fast decay consistent with an exponential function [42]. Figure 8 indicates that in the intermediate regime, up to times of the order of a few minutes, the envelope waveform is more consistent with a power-law decay, as proposed by Lee et al. [43]. Under these assumptions, the behavior of the envelope up to this time can be modeled as in Equation (12), with a characteristic time representing the typical duration of the main shock signal, leading to Equation (13). The existence of the third regime, previously highlighted by Sawazaki and Enescu [44], can be interpreted by taking into account that not only the main shock but each aftershock, occurring at its own time with its own magnitude, produces a signal following the same relation. One therefore expects a theoretical envelope of the form of Equation (15), where the maximum must be evaluated over all aftershocks with occurrence times smaller than t.
Numerical Generation of the Envelope Function
To verify that Equation (15) reproduces the experimental findings, Lippiello et al. [16] started from a main shock and assumed that the aftershock rate follows the OU law (Equation (6)). Since p-values usually show small fluctuations among different aftershock sequences [45], Lippiello et al. [16] assumed a fixed value of p and, after choosing different values of K and c, generated an aftershock sequence according to Equation (6) for a temporal window of three days. To each aftershock, a magnitude randomly extracted from the GR law is then associated. After fitting the decay of the experimental envelope, the key assumption is that an aftershock of a given magnitude, occurring at a given time, generates a seismic signal with an envelope of the same form as that of the main shock. The synthetic envelope is then obtained from Equation (15), and a vertical shift is finally applied in order to make the main shock peak of the synthetic envelope equal to the experimental one. The numerical parameters K and c, implemented in the OU law (Equation (6)), are then tuned in order to reach a good agreement between the synthetic and the experimental envelopes, according to the procedure described in Section 6.2. Results plotted as orange lines in Figure 8 show that it is possible to generate a synthetic envelope reproducing the experimental one in all three regimes. The above results indicate that, since each aftershock produces its own coda waves, which decay as a power law with exponent q, the overlap of coda waves generated by subsequent aftershocks produces a lower bound on the signal which itself decays as a power law (Equation (13)). The same agreement between synthetic and experimental envelopes is recovered for other main shocks recorded in Greece, Italy and Southern California [16].
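The maximum-over-coda-waves construction of Equation (15) can be sketched numerically. In the toy version below, each event contributes a log-envelope that decays as a power law (slope −q on log time), and the observed log-envelope is the maximum over all contributions; the peak values, the exponent q = 1.5, and the one-second exclusion of the direct waves are all illustrative assumptions.

```python
import math

def synthetic_log_envelope(t, events, q=1.5):
    """Log-envelope at time t (seconds), modeled as the maximum over all
    earlier events of a power-law decaying coda: an event of (log-)peak
    amplitude a_j at time t_j contributes a_j - q*log10(t - t_j).
    q and the 1 s direct-wave exclusion are illustrative choices."""
    contributions = [
        a_j - q * math.log10(t - t_j)
        for t_j, a_j in events
        if t > t_j + 1.0
    ]
    return max(contributions) if contributions else -math.inf

# main shock (log-peak 7.0) plus one aftershock (log-peak 4.0) 600 s later
events = [(0.0, 7.0), (600.0, 4.0)]
```

The envelope decays along the main shock coda, jumps up when the aftershock arrives, and then follows whichever coda is larger, which is exactly the mechanism that keeps the signal above a slowly decaying floor.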
We wish to stress that the main shock peak, as well as the aftershock peaks in Equation (15), strongly depends on the distance of the recording station from the main shock epicenter and on site effects. In addition, the functional form of the single-event envelope can be different at different stations. As a consequence, both the peaks and the envelope are different at different stations; however, under the hypothesis that aftershocks occur not too far from the main shock hypocenter, the values of K and c providing the best agreement between synthetic and experimental envelopes should be the same for all stations.
5. The ETASI Model
In the previous section, we showed that STAI is mostly due to the overlap among aftershock coda waves. This ingredient can be incorporated in the ETAS model by multiplying the ETAS occurrence rate in Equation (9) by a detection function (Equation (16)). The detection rate can still be described by an error function, as in Equation (4), and we define the model described by Equation (16) as the ETAS Incomplete (ETASI) model. The main difference with respect to Equation (3) is that, in this approach, the detection function depends on the history of all previous earthquakes. More precisely, in Equation (3), the detection function depends only on the time and magnitude of the main shock, whereas in Equation (16) each event can obscure the recording of subsequent earthquakes.
We observe that the ETASI model differs from the procedure adopted by Seif et al. [14], who generated incomplete ETAS catalogs by removing only aftershocks of large main shocks. In the ETASI model, conversely, any event can obscure subsequent earthquakes independently of its magnitude.
The simplest choice for the detection function was proposed by Hainzl [12] and corresponds to an error function in the limit of vanishing width, with a detection magnitude set by the largest event occurring within a constant blind time Tb before the given time. This corresponds to the hypothesis that each earthquake hides all subsequent smaller events occurring at temporal distances smaller than Tb. Notwithstanding the simplicity of this functional form, as already shown by Hainzl [12], this model, defined as ETASI1 in the following, leads to non-trivial temporal patterns of aftershock occurrence.
The hypothesis of a constant blind time allows one to evaluate the measured aftershock rate analytically [12]. Indeed, the blind time Tb also represents the minimum temporal distance between two subsequent earthquakes reported in a catalog, and this leads to a maximum detectable rate of 1/Tb. As a consequence, since the “true” aftershock rate is a decreasing function of the time t after the main shock occurrence (Equation (6)), the measured rate corresponds to the “true” aftershock rate only when the latter is smaller than 1/Tb, a condition which is always fulfilled at large times. Conversely, at small times, when the “true” aftershock rate is larger than 1/Tb, the measured rate exhibits a constant behavior equal to 1/Tb. Accordingly, the c-value can be identified as the time at which the “true” rate equals 1/Tb; assuming Equation (7), this gives Equation (18) which, under the appropriate identification of the parameters, coincides with Equation (8).
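The saturation argument above amounts to clipping the true OU rate at 1/Tb. A minimal sketch, assuming a 60 s blind time and illustrative OU parameters:

```python
def measured_rate(t, K=1000.0, c=0.001, p=1.1, blind_time_days=60.0 / 86400.0):
    """Apparent aftershock rate (events per day) under the ETASI1
    blind-time hypothesis: the detected rate cannot exceed 1/Tb (one
    event per blind time), so it equals min(true OU rate, 1/Tb).
    The 60 s blind time and the OU parameters are illustrative."""
    true_rate = K / (t + c) ** p
    return min(true_rate, 1.0 / blind_time_days)
```

At early times the measured rate is pinned at 1/Tb = 1440 events per day, producing the plateau that mimics an enlarged c-value; at late times it follows the true OU decay.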
The ETASI1 model can be implemented numerically via a two-step process: At the first step, standard ETAS catalogs are simulated and, at the second step, all events that occur at a temporal distance smaller than Tb after a larger event are removed from the catalog. de Arcangelis et al. [31] implemented different values of Tb and analyzed the ETASI1 catalogs by means of the same BP declustering procedure applied to the instrumental catalog. As in Figure 6, the aftershock daily rate for the ETASI1 catalog has been evaluated for different main shock magnitudes, different thresholds and different Tb values. This study has shown that the c-value follows Equation (8), as illustrated in Figure 7b, where data for different thresholds and main shock magnitudes, at the same Tb, collapse onto the same master curve, as for the instrumental catalog (Figure 7a). Concerning the value of c, de Arcangelis et al. [31] observed that the larger the value of Tb implemented in the ETAS simulations, the larger the value of c fitted from the decay of the rate. Results plotted in the inset of Figure 7b confirm the strong correlation between c and Tb. In particular, the dependence of c on Tb is consistent with Equation (18) only for small values of Tb. Deviations from Equation (18) can be attributed to the cascading process implemented in the ETAS model. Indeed, aftershocks of higher-order generations are also followed by a blind time, which eventually hides aftershocks of previous generations. This causes a larger total blind time compared to the situation in which higher-order generation aftershocks are not considered, as in Equation (18).
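The second step of the ETASI1 construction, the blind-time thinning, fits in a few lines. This is an illustrative sketch of the removal rule (an event is dropped if any earlier, larger event occurred within Tb before it); the catalog and the value of Tb are toy inputs.

```python
def apply_blind_time(catalog, tb):
    """ETASI1-style incompleteness: drop every event occurring within a
    time tb after any earlier, larger event.  catalog is a list of
    (time, magnitude) tuples sorted by time; tb is in the same time unit."""
    kept = []
    for j, (t, m) in enumerate(catalog):
        hidden = any(
            0.0 <= t - t0 < tb and m0 > m
            for t0, m0 in catalog[:j]  # hidden events still hide others
        )
        if not hidden:
            kept.append((t, m))
    return kept

# toy catalog (times in days): an M7 main shock hides two early aftershocks
catalog = [(0.0, 7.0), (0.001, 3.0), (0.002, 5.0), (0.5, 2.5), (0.6, 4.0)]
kept = apply_blind_time(catalog, 0.01)
```

Only the main shock and the two late events survive; the early M3 and M5 events fall inside the main shock's blind time and are removed.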
The comparison between the data collapse observed for the ETASI1 catalog (Figure 7b) and the one observed for the instrumental Southern California catalog (Figure 7a) suggests that the larger c-value inside Region 1 must be attributed to a larger productivity (larger K) of that region. This is in agreement with the behavior of the aftershock rate (Figure 6) at times when the “true” OU decay is expected. Indeed, it is evident that, at sufficiently large times, the rate in Region 1 is systematically larger than in Region 2.
We further observe that the scaling function presents clear deviations from the OU prediction in the intermediate temporal regime. We attribute these deviations to the cascading process, which can produce a more gradual decrease of the aftershock number from the initial plateau compared to the situation in which higher-order generation aftershocks are not taken into account [12]. A better fit for the rate in numerical and instrumental catalogs is provided by the functional form obtained by Lippiello et al. [46] under a dynamical scaling assumption [38,45,47,48,49,50,51].
5.1. ETASI2
A more refined expression for the detection function within the ETASI model (Equation (16)) is proposed in [31] and corresponds to the so-called ETASI2 model. The idea is that the detection function follows the same decay as the envelope function of a single earthquake; according to Equation (12), this corresponds to the assumption of Equation (19), where the maximum is evaluated over all previous events. The model is numerically implemented in [31] by taking for the detection rate function an error function, as in Equation (4), in the limit of vanishing width. This corresponds to the two-step procedure illustrated in the previous section, with the removal from the original ETAS catalog of all events with magnitude m and occurrence time t such that m lies below the obscuration level of Equation (19). A finite width of the detection function is considered in [52].
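In the vanishing-width limit, the ETASI2 thinning reduces to a sharp, history-dependent magnitude threshold. The sketch below assumes an obscuration level of the form m_j − q·log10((t − t_j)/t_s) for each earlier event, with an illustrative exponent q = 1.5 and signal duration t_s = 10 s; these values, like the toy catalog, are assumptions for illustration only.

```python
import math

def etasi2_threshold(t, earlier_events, q=1.5, t_s=10.0):
    """ETASI2-style detection magnitude at time t (seconds): each earlier
    event of magnitude m_j at time t_j obscures later events below
    m_j - q*log10((t - t_j)/t_s).  q and t_s are illustrative values."""
    thresholds = [
        m_j - q * math.log10((t - t_j) / t_s)
        for t_j, m_j in earlier_events
        if t > t_j
    ]
    return max(thresholds, default=-math.inf)

def thin_catalog(catalog, q=1.5, t_s=10.0):
    """Remove every event whose magnitude falls below the running
    obscuration threshold set by all earlier events (even hidden ones,
    since their coda waves are still present in the record)."""
    kept = []
    for j, (t, m) in enumerate(catalog):
        if m >= etasi2_threshold(t, catalog[:j], q=q, t_s=t_s):
            kept.append((t, m))
    return kept

# times in seconds: M7 main shock, then M3 events at 100 s and 10,000 s
catalog = [(0.0, 7.0), (100.0, 3.0), (10000.0, 3.0), (86400.0, 2.0)]
thinned = thin_catalog(catalog)
```

The early M3 event is hidden by the main shock coda, whereas an identical M3 event a few hours later survives: unlike ETASI1, the obscuration window here depends on the magnitude of the obscured event.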
In de Arcangelis et al. [31], the coefficient q in Equation (19) is taken as a model parameter, and its value has been tuned in order to achieve the best agreement between the organization of aftershocks in ETASI2 and instrumental catalogs. This study showed that the ETASI2 model provides a more accurate description of aftershock occurrence than the ETASI1 model; in particular, it better captures the correlations between subsequent magnitudes observed in instrumental catalogs. The agreement between instrumental and ETASI2 catalogs is obtained by setting a productivity value, in the ETASI2 simulations, significantly larger inside Region 1 of Southern California (Figure 1) than inside Region 2. As a consequence, de Arcangelis et al. [31] proposed that the productivity value which provides the best overlap between ETASI2 and instrumental catalogs can be interpreted as the best estimate of the true productivity coefficient in each region.
5.2. Dynamical Scaling ETAS Model
A model alternative to the ETASI has been proposed on the basis of a dynamical scaling relation between time and energy [19,36,38,46,47]. Within this hypothesis, differently from the general assumption of the ETAS model [3,53,54], time and magnitude are not independent quantities, but the magnitude difference fixes a characteristic time scale for aftershock rate relaxation. Deviations from the GR law are a natural consequence of this assumption, with a completeness magnitude depending on time in agreement with what is observed in experimental data (Equation (2)). A maximum likelihood study [51] has shown that this method provides a more accurate description of the aftershock rate decay than the ETAS model.
7. Conclusions
In this review article, we show that, in the first part of aftershock sequences, incompleteness is an intrinsic property of seismic data. Indeed, the overlap of seismic signals keeps the envelope function always above a lower threshold. This threshold can be related to the minimum aftershock magnitude identifiable at time t since the main shock and indicates that, although it is feasible to obtain more accurate catalogs, it is impossible to reach completeness levels below this bound. This result also provides an explanation for the dependence of the completeness magnitude on the time elapsed since the main shock occurrence. We illustrate how incompleteness affects the estimates of the parameters of STA forecasting models, and we present some models which take it explicitly into account. In particular, we present an interpretation of the mechanism responsible for the existence of the lower threshold in terms of the overlap of coda waves generated by each individual aftershock: The combination of the decay of the aftershock rate (OU law) with the power-law relaxation of coda waves produces an envelope function which, on average, depends logarithmically on the time since the main shock. We illustrate the bias induced in the estimates of model parameters by the incompleteness of the instrumental catalog. A deeper investigation is necessary to establish a quantitative relationship between the expected error in the estimate of model parameters and the degree of incompleteness of the catalog.
We also show that the parameters of the logarithmic decay of the envelope threshold appear strictly related to the parameters of the OU law. We then describe a procedure, based on this observation and developed in [16], to extract the OU law parameters from a fitting procedure applied to the experimental envelope. This approach overcomes all problems related to event identification and location, since seismic hazard is evaluated directly from the envelope function, without any information on the occurrence times, magnitudes and locations of the earthquakes producing the observed signal.
We also illustrate the Omi method [7,9,10,15], proposed to overcome the problems of STA forecasting caused by the incompleteness of instrumental data. We show that the method, based on the detection rate function, provides reliable aftershock forecasting on the basis of incomplete instrumental catalogs.
In summary, we have reviewed very recent proposals to develop real-time systems for automatic aftershock forecasting. The above procedures have so far been tested only retrospectively, but they already appear suitable for implementation in prospective tests. These methods apply the OU law or the ETAS model without taking into account the spatial variability of seismicity. Future developments should therefore concern space-time models providing space-dependent forecasting, particularly useful in aftershock sequences with a complex spatial distribution.