**Surface-Rupturing Historical Earthquakes in Australia and Their Environmental Effects: New Insights from Re-Analyses of Observational Data**

### **Tamarah R. King <sup>1,\*</sup>, Mark Quigley <sup>1</sup> and Dan Clark <sup>2</sup>**


Received: 2 August 2019; Accepted: 16 September 2019; Published: 20 September 2019

**Abstract:** We digitize surface rupture maps and compile observational data from 67 publications on ten of eleven historical, surface-rupturing earthquakes in Australia in order to analyze the prevailing characteristics of surface ruptures and other environmental effects in this crystalline basement-dominated intraplate environment. The studied earthquakes occurred between 1968 and 2018, and range in moment magnitude (Mw) from 4.7 to 6.6. All earthquakes involved co-seismic reverse faulting (with varying amounts of strike-slip) on single or multiple (1–6) discrete faults of ≥ 1 km length that are distinguished by orientation and kinematic criteria. Nine of ten earthquakes have surface-rupturing fault orientations that align with prevailing linear anomalies in geophysical (gravity and magnetic) data and bedrock structure (foliations and/or quartz veins and/or intrusive boundaries and/or pre-existing faults), indicating strong control of inherited crustal structure on contemporary faulting. Rupture kinematics are consistent with horizontal shortening driven by regional trajectories of horizontal compressive stress. The lack of precision in seismological data prohibits the assessment of whether surface ruptures project to hypocentral locations via contiguous, planar principal slip zones or whether rupture segmentation occurs between seismogenic depths and the surface. Rupture centroids of 1–4 km in depth indicate predominantly shallow seismic moment release. No studied earthquakes have unambiguous geological evidence for preceding surface-rupturing earthquakes on the same faults, and five earthquakes contain evidence of absence of preceding ruptures since the late Pleistocene, collectively highlighting the challenge of using mapped active faults to predict future seismic hazards. Estimated maximum fault slip rates are 0.2–9.1 m Myr<sup>−1</sup> with at least one order of magnitude of uncertainty.
New estimates for rupture length, fault dip, and coseismic net slip can be used to improve future iterations of earthquake magnitude–source size–displacement scaling equations. Observed environmental effects include primary surface rupture, secondary fractures/cracks, fissures, rock falls, ground-water anomalies, vegetation damage, sand-blows/liquefaction, displaced rock fragments, and holes from collapsible soil failure, at maximum estimated epicentral distances ranging from 0 to ~250 km. ESI-07 intensity-scale estimates range by ±3 classes in each earthquake, depending on the effect considered. Comparing Mw-ESI relationships across geologically diverse environments is a fruitful avenue for future research.

**Keywords:** Intraplate earthquake; surface rupture; Australian earthquakes; earthquake environmental effects; reverse earthquake; ESI 2007 scale; historical and recent earthquakes

#### **1. Introduction**

In the 50 years between 1968 and 2018 Australia experienced eleven known surface rupturing earthquakes (Table 1, Figure 1). Studies of Australian surface rupturing earthquakes have contributed to improvements in our collective understanding of intraplate earthquake behavior, including rupture recurrence, in stable continental regions (SCR) [1–5] and empirically-derived scaling relationships for reverse earthquakes [6–9]. This paper reviews available published literature on historic surface ruptures (Tables 1 and 2) and collates geological data (Tables 3 and 4, Figures 1 and 2), seismological data and analyses (Table 5), surface rupture measurements (Table 6), environmental damage (Table 7), and paleoseismic data (Table 8) (Figures 3–10). We re-evaluate and reconsider rupture and fault characteristics in light of new data (e.g., geophysical and geological) using modern analysis techniques (e.g., environmental seismic intensity scale (ESI-07) [10]) and new or updated concepts in earthquake science since the time of publication (e.g., paleoseismology, SCR earthquake recurrence).


**Table 1.** Summary of known historic Australian surface rupturing earthquakes and relevant references.

Other literature with relevant analysis or data regarding historic ruptures: [80–98].

**Figure 1.** Map of Australia showing locations of historic surface rupturing events, continental scale crustal divisions [99], onshore historic seismicity >Mw 4.0 (1840–2017) [11], simplified crustal stress trajectory map [100], GA neotectonic features database [95], recognized seismic zones [101,102] and specific crustal provinces relevant for surface rupture events (Table 3) [103]. Small maps show individual surface ruptures at the same scale, ordered by rupture length (excluding 2018 Lake Muir).

Australia is regarded as a stable continental region [104] surrounded by passive margins with an intraplate stress field controlled by plate boundary forces [100,105] (Figure 1). This stress field has been extant throughout much of Australia since the late Miocene, broadly concurrent with a rearrangement of tectonic boundaries in India, New Zealand, New Guinea, and Timor [101]. More than 360 potentially neotectonic features (those showing displacements associated with, or since initiation of, the current stress-field conditions) [5,95] have been recognized in the landscape through field mapping, subsurface geophysical imaging, digital elevation modelling, and palaeoseismic investigation [2,3,5,34,95,106–113] (Figure 1).

Southeast Australia and the Flinders Ranges (Figure 1) have the highest rates of seismicity [101,102] and estimated neotectonic fault slip rates [3,109,111], yet all of the largest onshore historic Australian earthquakes have occurred in Archean to Proterozoic cratonic crust across the central and western parts of the country (Figure 1) [11]. Of four defined zones of high seismicity (Figure 1) [101,102], the South West Seismic Zone (SWSZ) is the only one to coincide with historic surface ruptures (Meckering, Calingiri, Cadoux, Katanning, Lake Muir). Other ruptures (Marryat Creek, Tennant Creek, Pukatja, Petermann) have occurred in historically aseismic regions of the cratonic crust (Figure 1, Table 5, Section 3.2).


**Table 2.** Summary of data sources used in reviewing Australian historic surface ruptures.

#### **2. Review Data, Methods and Terminology**

Publications reviewed for ten of the eleven historic ruptures are provided in Table 1. At the time of writing, no publications are available for the most recent (eleventh) earthquake (8 November 2018 Mw 5.3 Lake Muir earthquake), although one is currently in review [79] and some imagery and data are available online (https://riskfrontiers.com/the-2018-lake-muir-earthquakes/, https://www.abc.net.au/news/2018-11-09/earthquake-hits-lake-muir-western-australia/10480694 (accessed on 21 June 2019)). Available details for this event are included in Tables 1, 3 and 7 but it is otherwise not investigated in this paper. The Tennant Creek event comprises three mainshocks in a 12-hour period on 22 January 1988, with three separate scarps recognized at the surface. Analysis of available seismological and surface data supports a direct association between each mainshock and an individual rupture (TC1: Kunayungku; TC2: Lake Surprise west; TC3: Lake Surprise east) [57,59,62,69], and they are treated as separate events in this paper.

Relevant papers were identified by reading through either (a) reference lists of recent (2010–2018) publications or (b) the citation history of older publications using Google Scholar. In total N = 67 articles were identified as containing relevant primary data and interpretations for individual or multiple surface rupturing events (Table 1). A further 16 publications were identified containing relevant information on Australian seismicity (drawing on data from the primary publications) or compilations of previously published material (Table 1). Other sources of data used to complement analysis of primary published data are summarized in Table 2.

Epicenter locations and focal mechanisms were collated from primary literature and online databases (Table 2). Geoscience Australia (GA) maintains a continuously updated online earthquake catalogue and recently published a national earthquake catalogue (NSHA18) covering 1840 to 2017 [11]. The NSHA18 catalogue contains revised magnitude values (Mw) for all surface rupturing events based on a comprehensive reanalysis [11,119]; these revised values are used in this study. Epicenters for surface rupturing events are generally located closer to the surface ruptures in the online database than in the NSHA18 catalogue.

Published surface rupture maps were previously digitized into GA's publicly available Neotectonic Features Database [95]. In this paper, we sourced the original maps, georeferenced them, and digitized secondary fracturing that was left out of the GA database (Figures 3–10). Some ruptures were relocated up to 200 m from the locations in the Neotectonic Features Database based on infrastructure and visible surface rupture matched on high resolution satellite imagery (Table 2) and primary maps, to correct for datum transformation errors.

For the purposes of this paper we use the terms "surface rupture" and "scarp" to describe the primary zone along which hanging-wall and foot-wall offset is visible at the surface. "Fracturing" refers to secondary surface features associated with the primary rupture that do not host significant displacement (e.g., cracking). "Fissures" describe significant extensional cracks, often widened by non-seismic edge collapse. "Fault" is used to describe the seismologically defined plane of rupture, of which the surface rupture is the observable expression.

#### **3. Results**

Detailed summaries of the geology, seismology, surface rupture and palaeoseismology for the eleven considered historical surface ruptures from 1968 to 2016 are available as seven EarthArXiv reports [120–126]. Figures and data in these reports include available geological maps, geophysical maps, borehole data, surface rupture maps, displacement data, and available palaeoseismic trench logs. In the process of reviewing the available literature, a number of inconsistencies in data usage or reproduction were identified. These are summarized in Section 4.1 of this paper, with more detail available in the EarthArXiv reports. Below is a concise summary of the seven reports (the three Tennant Creek ruptures are contained within a single report), with key data presented in Tables 3–9 and Figures 3–10.

#### *3.1. Geology*

The Meckering, Calingiri, Cadoux, and Katanning events occurred in the Archean Yilgarn Craton within ~25 km of significant terrane boundaries (Figure 1). The Lake Muir event occurred in the Albany-Fraser Orogen, <15 km south of the south-dipping terrane boundary with the Yilgarn Craton (Figure 1). The Marryat Creek, Pukatja and Petermann events occurred within the Mesoproterozoic Musgrave Block (Figure 1), within 0–10 km of major terrane boundaries. The Tennant Creek ruptures extend across the boundary of the Proterozoic Warramunga Province and Neoproterozoic–Cambrian Wiso Basin (Figure 1) (summary of all regional geology in Table 3, comprehensive details in EarthArXiv reports [120–126]).

**Table 3.** Summary of regional geology for each historic surface rupture.


Granitic gneiss, migmatite, mylonite, granulite, and/or amphibolite basement rock is observed in trenches or outcrop at <1 m depth at multiple locations along the Petermann (Figure 2), Pukatja, Marryat Creek (Figure 2), Cadoux and Meckering ruptures. Proterozoic basement in the vicinity of the Tennant Creek ruptures is variably overlain by tens to hundreds of meters of Phanerozoic basin bedrock. Structural measurements (foliations, intrusive boundaries) for bedrock outcrops within 5 km of surface ruptures are qualitatively well aligned with surface ruptures in eight of ten events, though dip measurements are only qualitatively well aligned in three cases. This may relate to the difficulty of measuring dip in heavily weathered bedrock. (Summary of basement/bedrock in Table 4; comprehensive details in EarthArXiv reports [120–126]).

Nine of ten ruptures align with linear magnetic anomalies (Figure 2) and six align with linear gravity anomalies/gradients. The Katanning rupture does not align with either gravity or magnetics at the scale of available geophysical data (Figure 2, Table 2), and the Lake Muir rupture was not studied in this paper (paper in review: [79]). In cases where surface rupture traces are highly curved, arcuate, and/or segmented (Meckering, Marryat Creek, Tennant Creek, Pukatja), the distinctly-oriented rupture traces all align with distinct orientations of linear geophysical anomalies interpreted as faults, dikes, and lithological contacts (e.g., [33]).

**Figure 2.** Examples of the relationship of geophysical data and surface outcrop to historic ruptures: (**a**) national total magnetic intensity map with ruptures overlaid, and dashed lines indicating linear anomalies; (**b**) interpreted basement geology around the three Tennant Creek scarps (no basement outcrops at the surface), demonstrating strong correlation between intrusive/lithological boundaries, basement faults, and historic surface rupture (legend heavily simplified to show lithologies around the ruptures; more details in the EarthArXiv report [126] legend and original map; map used under Creative Commons, NT Gov); (**c**) examples of surface outcrop structures visible in basement around the Marryat Creek rupture, including three sets of dike/foliation/fault orientations coincident with the three major orientations of the historic rupture (uninterpreted satellite imagery insets (i) and (ii) available in the Marryat Creek EarthArXiv report [123]); (**d**) example of mylonite foliation orientation along a section of the Petermann rupture where outcrop occurs within the primary rupture zone.



Abbreviations: fault (FT); fold (FD); vein (V); dike (D); lithological/batholith (L); geophysical (G)/surface (S), with specific examples detailed below: 1 Rupture along brecciated quartz in a cutting/trench [25]. 2 Tank scarp ruptures through surface granite, and along a quartz vein in a hand-dug trench; bedrock within 10 m of the Kalajzic scarp aligned with rupture [41]. 3 Two trenches confirm rupture occurred along a pre-existing Proterozoic fault [118]. 4 Normal fault interpreted below Kunayungku scarp from ground-water boreholes and geophysical data prior to rupture (1981) [52,59]. 5 Lake Surprise west scarp runs along a 1–2 m high quartz ridge, associated with fluid movement along a bedrock fault [59,63,64]; bedrock does not outcrop at the surface. 6 Rupture aligns with the projected boundary between a granite batholith and granulite facies gneiss; rupture curves around outcrop of granite batholith [9]. 7 Rupture abuts mylonite outcrop with the same strike and dip; mylonite with the same strike and dip outcrops within 1 m of the scarp in multiple locations [73].

#### *3.2. Seismology*

The sparse nature of the Australian National Seismograph Network (https://www.fdsn.org/networks/detail/AU/) results in large (i.e., ≥5–10 km) uncertainties in earthquake epicenter and hypocenter location estimates that are difficult to quantify, including those for the earthquakes studied here [102]. Epicentral determinations (Figures 3–10) are typically not sufficiently accurate to unambiguously associate with surface ruptures. Six of ten ruptures have favored epicenter locations on the rupture hanging-wall, within approximated positional uncertainty bounds.

Many publications do not state statistical uncertainties for their epicenter locations. Uncertainties listed in Table 5 include published uncertainties or an assigned value of ±10 km where no uncertainties are available [102]. Epicenters with lower uncertainties are derived using a variety of relocation methods including extra analysis (e.g., InSAR slip distributions, joint hypocenter determination) or extra data (e.g., surface rupture location, aftershocks from temporary seismometer arrays) (comprehensive details in EarthArXiv reports [120–126]). The epistemic uncertainty relating to the quality of velocity models used to locate epicenters is unconstrained but appears to be one of the major sources of inaccurate locations where instrumentation is particularly sparse. For instance, epicenters for Pukatja and Marryat Creek are located up to 17 and 30 km from the identified surface rupture respectively, showing that large uncertainties still affected remote earthquake locations between 1986 and 2012.

Hypocenters derived from mainshock instrumental data do not project onto rupture planes as defined by surface rupture for any of the studied events. Hypocentral depth estimates based on aftershock data and relocated epicenter locations suggest depths of <5 km (for Tennant Creek [59], Petermann [73] and Meckering [37]). Centroid moment tensor depths are <6 km, with the authors' preferred best-fits all <4 km depth (Meckering [26–28]; Cadoux [28]; Marryat Creek [28]; Tennant Creek [55]; Katanning [70]; Petermann [74]).

Epicentral location uncertainties limit the study of rupture propagation direction(s) for most events. Model scenarios for the Meckering earthquake support a bilateral rupture [37]. Unilateral upwards propagation has been proposed for the first Tennant Creek mainshock, complex propagation for the second mainshock, and unilateral upwards propagation to the southeast for the third mainshock (all on separate faults) [57].

Seven of ten events show foreshock activity within six months and 50 km of the mainshock epicenter, and six of ten show instrumentally recorded prior seismicity (more than five events within 10 years and 50 km). Precise locations are difficult to obtain due to epistemic and statistical uncertainties, particularly for seismicity prior to 1980 due to sparse instrumentation [102]. Aftershock data are inherently incomplete for most events due to sparse instrumentation. However, temporary seismometers were deployed following most events, and the magnitude of completeness for the national network is Mw > 3.0 for all events [102] (though the locations of these events are generally highly uncertain compared to the temporary arrays, as discussed above). The Musgrave Block events (Marryat Creek, Pukatja, Petermann; Figure 1, Table 3) show less aftershock activity than the Tennant Creek and Western Australia earthquakes (Meckering, Calingiri, Cadoux), which had extended aftershock sequences [5,34].
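The foreshock and prior-seismicity criteria above (events within six months and 50 km of the epicenter; more than five events within 10 years and 50 km) can be expressed as a simple catalogue query. The sketch below is illustrative only: the catalogue format (a list of `(datetime, latitude, longitude)` tuples) and all function names are assumptions, not from the paper or the NSHA18 catalogue.

```python
import math
from datetime import datetime, timedelta

def distance_km(lat1, lon1, lat2, lon2):
    """Great-circle (haversine) distance between two points, in km."""
    r = 6371.0  # mean Earth radius, km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = p2 - p1, math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def prior_events(catalogue, mainshock, days, radius_km):
    """Catalogue events occurring within 'days' before the mainshock and
    within 'radius_km' of its epicenter. Each event and the mainshock are
    (datetime, lat, lon) tuples (a hypothetical format)."""
    t0, lat0, lon0 = mainshock
    return [e for e in catalogue
            if timedelta(0) < t0 - e[0] <= timedelta(days=days)
            and distance_km(lat0, lon0, e[1], e[2]) <= radius_km]

# Foreshock criterion: any event within ~6 months (182 days) and 50 km.
# Prior-seismicity criterion: more than five events within 10 years and 50 km.
```

In practice such counts are only as good as the catalogue's completeness and location accuracy, which, as noted above, degrade sharply for remote events and before 1980.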



ˆ uncertainty. 4 Earthquake within 6 months and 50 km of epicenter (affected by catalogue completeness for very remote events; see EarthArXiv reports [120–126] for details). 5 Earthquakes (n > 5) within 10 years and 50 km of the epicenter (affected by catalogue completeness for very remote events; see EarthArXiv reports for details). 6 Geometry of the seismogenic fault is unclear as scarps in the Cadoux rupture dip both east and west. 7 Waveform analysis of the second Tennant Creek mainshock shows complicated rupture, potentially related to complex fault interaction.

#### *Geosciences* **2019**, *9*, 408

#### *3.3. Surface Ruptures*

Methods for the original mapping of individual ruptures are summarized in Table 6 and give some indication of data quality (explored in more detail in EarthArXiv reports [120–126]). Some readjustment of terminology and classification is required when considering the earlier ruptures (e.g., 'fault' may refer to both primary rupture and secondary fractures), and considerable detail of rupture morphology was lost between fine-scale maps (i.e., 1:500) and whole-rupture maps (1:25,000–1:50,000) for pre-digital events (Meckering to Tennant Creek). Six of ten ruptures are concave relative to the hanging-wall, three are straight, and one is slightly convex (Petermann) (Table 6). All ruptures are reverse, and only two events have surface measurements consistent with secondary lateral movement (Meckering: dextral; Calingiri: sinistral; Table 6, explored in individual EarthArXiv reports [120–126]).

Nine of ten ruptures studied (Katanning was excluded due to lack of field mapping) show a relationship between surface sediments/bedrock depth and rupture morphology. Discrete rupture and duplexing rupture are more common where bedrock is close to the surface or surface sediments are predominantly calcrete/ferricrete/silcrete. Where sands dominate the surface sediments, rupture tends to present as warping and folding, or corresponds with breaks in visible surface rupture (e.g., Petermann; morphology explored in the individual EarthArXiv report [125]).

Figures 3–10 show digitized versions of published primary ruptures, secondary fracturing, and dip values measured at the surface. Primary sources are inconsistent in how they derive the published length values describing their mapped ruptures (Tables 1 and 6; explored in detail in EarthArXiv reports [120–126]), and these values are then used in secondary sources including scaling relationships. Inconsistencies include simplifying scarps to straight lengths (Calingiri, Cadoux, Marryat Creek), capturing along-rupture complexity to varying degrees (Pukatja, Tennant Creek), excluding segments that have the length, offset, and morphology characteristics of primary rupture (Meckering, Tennant Creek, Cadoux), and reporting InSAR-derived lengths rather than visible rupture (Katanning) (explored in more detail in individual EarthArXiv reports [120–126]).

Past measurements of rupture length have been inconsistent in their approach. Here, we re-classify mapped primary ruptures from original primary sources in order to generate a consistent rupture length dataset (Table 6). We simplify ruptures to straight lines and define new faults where mapped primary rupture has gaps/steps > 1 km and/or where strike changes by > 20° for distances > 1 km [135]. The Splinter and Burges scarps (Meckering), Lake Surprise west foot-wall scarp (Tennant Creek), and individual Cadoux scarps were not included in original published lengths. These features show offsets, lengths, and locations consistent with primary slip along basement structures proximal to the main scarps, and we therefore include them in our length values.
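These re-classification rules can be made explicit in code. The sketch below is a minimal illustration, assuming each mapped rupture trace is an ordered list of (east, north) vertices in km and that each trace's strike is approximated end-to-end as a straight line (mirroring the simplification to straight lines described above); the thresholds reflect the > 1 km gap/step and > 20° strike-change criteria, and all function names are ours, not from the paper.

```python
import math

GAP_KM = 1.0       # new fault where the mapped rupture gaps/steps by > 1 km
STRIKE_DEG = 20.0  # new fault where strike changes by > 20 deg over > 1 km

def strike(p, q):
    """Azimuth in degrees (0-360) from vertex p to vertex q, (east, north) km."""
    return math.degrees(math.atan2(q[0] - p[0], q[1] - p[1])) % 360.0

def strike_diff(a, b):
    """Smallest angular difference between two strikes, ignoring line direction."""
    d = abs(a - b) % 180.0
    return min(d, 180.0 - d)

def split_into_faults(traces):
    """Group mapped rupture traces into faults: a new fault starts at a
    gap/step > 1 km, or at a strike change > 20 degrees sustained over a
    trace longer than 1 km."""
    faults, current = [], [traces[0]]
    for prev, nxt in zip(traces, traces[1:]):
        gap = math.dist(prev[-1], nxt[0])  # step between trace ends
        ds = strike_diff(strike(prev[0], prev[-1]), strike(nxt[0], nxt[-1]))
        long_enough = math.dist(nxt[0], nxt[-1]) > 1.0
        if gap > GAP_KM or (ds > STRIKE_DEG and long_enough):
            faults.append(current)
            current = [nxt]
        else:
            current.append(nxt)
    faults.append(current)
    return faults
```

For example, two collinear traces separated by a 0.4 km step fall on one fault, while a 2 km step (or a sustained > 20° bend, as at Marryat Creek) starts a new fault.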

Where InSAR is available (Katanning and Petermann) we present fault lengths described by both visible rupture and InSAR (Table 6). Visible rupture in the Petermann event was highly segmented due to ineffective rupture propagation through sand dunes up to six metres high [73]. Due to this, we apply a slightly altered set of criteria to this event: faults are defined where the strike of visible rupture and InSAR changes by > 20° and/or where steps in InSAR and visible rupture are > 1 km (Figure 10) [75,78].

Under these criteria, seven of ten ruptures have more than one source fault defined (i.e., a multi-fault rupture). The total length of faulting is the same as published values for two events, increases by 2%–51% relative to published length for four events, and decreases by 4%–60% for three events (Tables 1 and 6). These lengths describe primary surface ruptures in a consistent way, accounting for all segments of rupture which show evidence of slip along basement structures. Our preferred length for each rupture, including uncertainties, is presented in Tables 1 and 6.



1 Where mapped primary rupture has a gap/step > 1 km and/or change in strike > 20° across a length > 1 km (except where InSAR is available across gaps > 1 km). Lengths of individual faults available in EarthArXiv reports [120–126]. 2 Vertical and lateral displacements digitized from original publications. Net slip calculated for this study. 3 Original mapping method: Field work (FW); aerial photographs (A); surveying (levelling, cadastral or GPS), basic (SB) or comprehensive (SC); InSAR (In); Drone (D); Satellite (SI). 4 Concave (CC) relative to hanging-wall; convex (CV) relative to hanging-wall; straight (minor deviations but overall straight shape). 5 Length-weighted average across 0.5 km increments (where rupture length > 5 km) or 0.1 km increments (where rupture length < 5 km). 6 Profile shape based on Wesnousky (2008) [7] from visual fit (e.g., not best-fit regression curves): symmetrical (S); asymmetrical (AS); triangle (Tg); sine; average line (Avg). 7 Katanning visible surface rupture was observed, but no field mapping was conducted [70,71]. Original and subsequent publications describe Katanning length based on best-fit InSAR-derived source parameters (1.26 km) [70], rather than the length of the InSAR trace (2.5 km). Offset comes from field estimates (0.1 m) and fault modelling from InSAR data [70].


**Figure 3.** 1968 Mw 6.6 Meckering earthquake (**a**) rupture and fracture map of Meckering and Splinter scarps [25] with faults labelled as per displacement graphs, trench location from [37] (**b**) published epicenter locations, open stars show approximate locations of epicenters without published coordinates (**c**) selected dip measurements of scarp and displacement of resurveyed road bench marks [25] (**d**) graphs of along-rupture vertical and lateral displacement measurements and net slip calculations [25] and net slip calculated from available data averaged over 0.5 km increments (this study) (**e**) focal mechanisms (red line shows preferred plane from original publication) from (i) Fitch et al., 1973, (ii) Fitch et al., 1993 & Leonard et al., 2002, (iii) Fredrich et al., 1988, and (iv) Vogfjord and Langston 1987.

**Figure 4.** 1970 Mw 5.0 Calingiri earthquake (**a**) rupture and fracture map of Calingiri [25] showing published epicenter locations and dip measurements of scarp [25], focal mechanism (red line shows preferred plane from original publication) from Fitch et al., 1973 (**b**) graph of along-rupture vertical and lateral displacement measurements [25] and net slip calculated from available data averaged over 0.1 km increments (this study).

**Figure 5.** 1979 Mw 6.1 Cadoux earthquake (**a**) rupture scarps and fracturing involved in the Cadoux rupture with named faults [41], focal mechanisms from (i) Denham et al., 1987 (ii) Fredrich et al., 1988 (iii) Everingham and Smith (unpublished, Lewis et al., 1981) (iv) CMT (**b**) available dip measurements, black where directly measured and grey were calculated based on available displacement measurements [41] (**c**) published epicenter locations (**d**) graph along-rupture of vertical and lateral displacement measurements and calculated net slip [41] and net slip calculated from available data averaged over 0.5 km increments (this study).

**Figure 6.** 1986 Mw 5.7 Marryat Creek earthquake (**a**) rupture and fracture map of Marryat Creek scarp and available dip measurements [48,118] with faults labelled as per displacement graphs, focal mechanisms (red line shows preferred plane from original publication) from (i) Fredrich et al., 1988, (ii) Barlow et al., 1986, (iii) CMT, trench location from [118], (**b**) published epicenter locations, and (**c**) graph of along-rupture vertical and lateral displacement measurements [48] and net slip calculated from available data averaged over 0.5 km increments (this study).

**Figure 7.** 1988 Mw 6.3 (TC1), 6.4 (TC2) and 6.6 (TC3) Tennant Creek earthquakes (**a**) rupture and cracking map of Kunayungku and Lake Surprise scarps with available dip measurements and the locations of trenches from [63], with faults labelled as per displacement graphs, focal mechanisms (red line shows preferred plane from original publication) from (i) McCaffrey 1989, (ii) Choy and Bowman 1990, (iii) Jones et al., 1991, (iv) CMT, (**b**) published epicenter locations of all three mainshocks, (**c**) resurveyed benchmark offsets [63], uncertainties as discussed in text, and (**d**) graphs of along-rupture vertical and lateral displacement measurements [63] and net slip calculated from available data averaged over 0.5 km increments (this study).

**Figure 8.** 2008 Mw 4.7 Katanning earthquake (**a**) approximate visible rupture and InSAR trace (digitized from [70]), published epicenter locations and focal mechanism [70] (**b**) graph of along-rupture vertical displacement taken from InSAR data [70] and net slip calculated from InSAR data (this study).

**Figure 9.** 2012 Mw 5.2 Pukatja/Ernabella earthquake (**a**) rupture and fracture map of Pukatja scarp, available dip measurements, and the location of the hand-dug trench [9], focal mechanisms as described in [9] from (i) Clark et al., 2014, (ii) GCMT, (iii) St Louis University; (**b**) graph of along-rupture vertical displacement measurements [9] and net slip calculated from available data averaged over 0.1 km increments (this study).

**Figure 10.** 2016 Mw 6.1 Petermann earthquake (**a**) rupture and fracture map of Petermann scarp [73] showing published epicenter locations, dip measurements of rupture, and the locations of hand-dug trenches, focal mechanisms (i–iii) as described in [73], (i) USGS, (ii) GCMT, (iii) Geofon, and (iv) from Hejrani and Tkalcic 2018; (**b**) graph of along-rupture vertical displacement measurements and net slip calculated from available data averaged over 0.5 km increments.

Vertical and lateral offsets for all ruptures were digitized from primary literature (see EarthArXiv reports [120–126] for methods and uncertainties). New net-slip values were calculated for all ruptures from measured offsets, with dips assigned to each field offset measurement based on measured surface dips and/or focal mechanisms (dip measurements from primary literature shown in Figures 3–10 and in Table 6; preferred dips from this paper in Tables 1 and 6). Offset and net-slip data are presented in Figures 3–10 along with length-weighted averages, which reduce bias towards sections of scarp where a high number of measurements was taken (generally where offset is higher). Average offset is 42%–77% lower than the maximum offset for each rupture (Table 6). Displacement data are visually assigned a displacement profile shape [7], with six of ten ruptures best described by triangle profiles (2 = symmetrical, 4 = asymmetrical), two assigned an asymmetrical sine profile, and three best represented by a straight average profile (Table 6).
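The net-slip construction can be illustrated with a standard fault-geometry relation, treating the vertical offset as the dip-slip component resolved on a plane of the assigned dip and combining it with the lateral (strike-slip) offset; the binning function mirrors the length-weighted averaging over fixed along-rupture increments. Both are our illustrative assumptions, not the paper's exact implementation.

```python
import math

def net_slip(vertical_m, lateral_m, dip_deg):
    """Net slip from vertical and lateral surface offsets, assuming the
    vertical offset records dip slip on a plane of the given dip (a
    standard relation; the paper's exact treatment may differ)."""
    dip_slip = vertical_m / math.sin(math.radians(dip_deg))
    return math.hypot(dip_slip, lateral_m)

def length_weighted_average(positions_km, values, bin_km=0.5):
    """Average 'values' at along-rupture 'positions_km' within fixed bins,
    then average the bin means, so that densely sampled (typically
    high-offset) sections do not dominate the rupture-wide average."""
    bins = {}
    for x, v in zip(positions_km, values):
        bins.setdefault(int(x // bin_km), []).append(v)
    return sum(sum(b) / len(b) for b in bins.values()) / len(bins)
```

For a 30°-dipping reverse fault with 1 m of vertical offset and no lateral slip, this gives a net slip of 2 m, illustrating why shallow-dipping ruptures imply net slips well above their scarp heights.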

Offset data from resurveyed benchmarks (Tennant Creek [62]) and relevelling along infrastructure (Meckering [25]) were digitized from original publications to visualize distributed deformation across the rupture zone (Figures 3 and 7). No uncertainties are reported for the Meckering data [25], though they are likely of the same order as those for Tennant Creek, where the original authors report uncertainties of ±9.3–25 cm [62,136]. Despite these large uncertainties, the authors of both datasets consider that the offsets constrain fault geometries and show distributed hanging-wall uplift (and, to a lesser extent, foot-wall depression).

#### *3.4. Environmental Damage*

Environmental damage described in primary literature or visible in published photos for each event was classified under the ESI-07 scale [10] and summarized in Table 7 (comprehensive details in EarthArXiv reports [120–126]). Seven of eleven historic ruptures (excluding Katanning) can be described as ESI IX–X despite having a wide range of lengths, magnitudes, and displacements.

Fracturing/cracking is reported for all historic surface ruptures, but generally only in the immediate vicinity of the surface rupture, captured by the rupture ESI value. This may relate to a lack of far-field mapping but is considered to be fairly representative of the true spatial distribution based on described mapping campaigns. The Meckering and Petermann events have the most areally extensive fracturing, with areas of 580 and 210 km<sup>2</sup> respectively. Of the total area covered by Meckering and Petermann fracturing, approximately 70% and 77% respectively is on the hanging-wall.

Where events occurred close to population centers (Meckering, Cadoux, Calingiri), groundwater bores showed evidence of seismic fluctuation (no anomalies were identified in Tennant Creek bore data). The only observed liquefaction for any historic rupture comes from Meckering, where multiple sand blows were observed on the hanging-wall along the Mortlock River. Rockfalls are reported for three ruptures. Concentric or polygonal cracking was reported in the Meckering, Calingiri, Cadoux and Petermann events [25,41,73], and holes (possibly related to collapsible soils, e.g., [137]) were reported along the rupture on the hanging-wall for Calingiri and Petermann [41,73]. It is possible that tree damage, hydrological effects, rockfalls, polygonal cracking, or holes occurred for ruptures other than those listed but were not observed or described. Until the 2012 Pukatja event, field investigations immediately following each event were conducted by hard-rock geologists or seismologists not necessarily familiar with earthquake mapping techniques.

#### *3.5. Paleoseismology and Slip Rate*

In total, 14 trenches are described across the Meckering, Calingiri, Cadoux, Marryat Creek, Tennant Creek, Pukatja and Petermann ruptures (Table 8). Tennant Creek, Marryat Creek and Meckering are the only ruptures where detailed palaeoseismic work is published, including multiple trenches, luminescence dating, and soil descriptions and chemistry [37,63].


**Table 7.** Summary of environmental effects described for each rupture, assigned ESI level, and approximate area around, or distance from, the scarp for which damage was described, together with the magnitude of the largest ground-water level fluctuations recorded following each earthquake. – No data or observations published. - Observations of damage outside of ESI-07 descriptions. - Damage noted as explicitly present. \* Evidence of damage but no detailed description.

**Table 8.** Summary of paleoseismic trench investigations across historic surface ruptures. <sup>1</sup> Soil horizon ages described on foot-wall relative to hanging-wall [25]. <sup>2</sup> Fissure of potentially seismic origin filled with gravel, overlain by eolian sand; fracture through ferricrete overlain by gravel and eolian sand [63]. <sup>3</sup> Authors propose three possible origins: earthquake rupture bedrock offset; paleochannel along pre-existing bedrock structure; or a combination of both; paleotopography is greater than twice the height of historic slip [63].

Of seven ruptures with detailed trench data (Table 8), five show evidence of no rupture since the late Pleistocene (Meckering, Marryat Creek, Kunayungku, Lake Surprise east, Petermann). The only evidence of a pre-existing bedrock scarp exposed in any trench occurs in the second Lake Surprise west trench, and no clear evidence was found to support a seismic-offset origin for this topography (see EarthArXiv report [126] for more detail). Penultimate ruptures since 100 ka are possible for two of these seven earthquake events, where sediments are estimated to be <100 ka in age, and where either no ferricrete/bedrock is exposed (Pukatja), or a bedrock scarp exists prior to overlying sedimentation (Lake Surprise west) (Table 8).

Maximum slip rates are calculated by applying minimum and maximum erosion rates for bedrock to determine the amount of slip (from average observed historic slip (Table 6)) that could have accumulated and been removed in the past million years. Minimum (0.3 m Myr<sup>−1</sup>) and maximum (5.7 m Myr<sup>−1</sup>) cosmogenic nuclide erosion rates from crystalline bedrock inselbergs across the Precambrian crust of central Australia (Figure 1) [138] are applied for ruptures where crystalline basement is exposed in trenches or observed at the surface within two meters of the rupture (implying shallow bedrock). Where trenching exposed ferricrete or quartzite, cosmogenic nuclide erosion rates for quartzite exposed on flat bedrock summits in the Flinders Ranges are applied (5–10 m Myr<sup>−1</sup>) [109].
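One way to read this calculation (our sketch, with a hypothetical 1.5 m average net slip; the erosion rates are those quoted from [138]) is that the maximum slip budget over 1 Myr is the observed average net slip plus whatever relief erosion could have removed in the same window:

```python
# Hedged sketch of the maximum-slip-rate logic (our reading of the method,
# not the authors' code). Over a 1 Myr window, scarp height equal to the
# observed average net slip could have accumulated while erosion removed
# an additional (erosion rate x window) of relief.
def max_slip_rate(avg_net_slip_m, erosion_rate_m_per_myr, window_myr=1.0):
    eroded_m = erosion_rate_m_per_myr * window_myr
    return (avg_net_slip_m + eroded_m) / window_myr  # m/Myr

# Hypothetical event with 1.5 m average net slip, crystalline-basement
# erosion rates from [138]:
print(max_slip_rate(1.5, 0.3))  # 1.8 m/Myr (minimum erosion rate)
print(max_slip_rate(1.5, 5.7))  # 7.2 m/Myr (maximum erosion rate)
```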

Applying erosion rates from inselbergs and quartzite bedrock summits to surface bedrock across the different cratonic and erosional environments in which ruptures occurred (e.g., Figure 1) introduces uncertainties that are unavoidable due to a lack of more appropriate erosion data. Based on a lack of evidence of any preceding ruptures for any of the historic events, including topographic or geomorphic evidence, we prefer the minimum erosion estimates, giving maximum slip rates of 0.2–9.1 m Myr<sup>−1</sup>.


**Table 9.** Maximum slip rates based on minimum and maximum bedrock erosion rates [109,138] and length-weighted average net-slip values (Table 6).

\* Erosion rate for crystalline basement (CB) [138]; erosion rate for quartzite (Q) [109].

#### **4. Discussion: Lessons from the Last 50 Years of Australian Surface Ruptures**

#### *4.1. Inconsistencies in Data Use*

A number of inconsistent uses of data were found while reviewing papers that reference primary sources, as summarized below:


and 6). Data tables used in subsequent scaling relationships [6] describe the event as left-lateral based on one of three published focal mechanisms. We recommend a dominantly reverse mechanism for this event based on all available data.


#### *4.2. Surface Rupture Bedrock Controls, Updated Datasets and Environmental Intensity*

Analysis of geology and reanalysis of mapped ruptures presented in this paper suggest that in four of the ten events studied (Meckering, Marryat Creek, Cadoux and Petermann) rupture propagated across 2–6 bedrock-controlled faults (e.g., pre-existing fractures and/or foliation planes and/or lithological boundaries and/or intrusive boundaries), and nine of ten events show strong basement controls on rupture location and orientation. Simplistic projection of surface-defined faults using our preferred dips results in faults intersecting at depths consistent with published centroid depths (e.g., <4 km) in three of the four events with more than one fault defined (Petermann excluded). In all four cases, fault intersections project up-dip to the area of maximum vertical offset (in the case of Petermann, maximum dip occurs where the two faults overlap). It is uncertain with available seismic data whether hypocenters align with these projected fault intersections, and more data would be required to show that surface-defined faults can be extended to depth along planar slip zones. However, linear geophysical anomalies in many cases show that ruptures are associated with conjugate basement fracture/dike orientations underlying the rupture, suggesting strong control of the crustal architecture on intraplate earthquake nucleation and/or propagation.
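The simplistic down-dip projection described above can be illustrated for two faults dipping toward each other (our construction, with hypothetical trace separation and dips; not the authors' code):

```python
import math

# Sketch (our construction): depth at which two surface-defined faults
# with opposing dips, separated by a horizontal distance at the surface,
# project to a common intersection along planar slip zones.
def intersection_depth_km(separation_km, dip_a_deg, dip_b_deg):
    cot = lambda deg: 1.0 / math.tan(math.radians(deg))
    return separation_km / (cot(dip_a_deg) + cot(dip_b_deg))

# Hypothetical surface traces 3 km apart, each dipping 45 degrees toward
# the other: intersection at 1.5 km, within published centroid depths (<4 km)
print(round(intersection_depth_km(3.0, 45.0, 45.0), 2))  # 1.5
```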

New length, dip, and net-slip data presented for historic ruptures are derived by applying a consistent framework and methodology. Past scaling relationships have included and excluded Australian surface-rupturing events inconsistently, generally without clear explanation. They have also relied on vertical offset measurements, as most of the original publications do not calculate net slip. Length-weighted averages of net-slip values calculated in this paper are 32%–67% larger than those for vertical offset data, and maximum net slip is 68%–89% higher than maximum vertical offset. This suggests that Australian events may be systematically misrepresented in past scaling relationships. Our new data, compiled by thorough analysis of available seismological and field data, and coupled with the recent revision of magnitude values [11], will facilitate more consistent integration of Australian events into earthquake catalogues and displacement-length scaling relationships.

In Table 10 we compare the Mw, rupture area, and average displacement calculated from the SCR dip-slip scaling relationships of [148], using both the surface rupture lengths used in developing those relationships and the lengths from this paper. Table 10 then compares the average displacement and Mw derived from our lengths via the SCR dip-slip scaling relationships [148] against the average net slip derived in this paper and the updated Mw values [11]. Results show differences of over 600% between scaling-relationship average displacement and the calculated average net-slip values of this paper, and up to 18.7% difference between calculated Mw and updated values [11]. This highlights the need to investigate the length, magnitude, and net-slip inputs of previous scaling relationships.

Most vertical displacement data for historic surface ruptures are collected as spot-height measurements of foot-wall elevation relative to hanging-wall elevation proximal to the surface trace. Less frequently, scarp-perpendicular profiles are captured 5–50 m either side of the rupture. Satellite-based imaging of recent scarps (Petermann [75,77,78], Katanning [70], Lake Muir [79]) shows permanent distributed displacement of the hanging-wall, and to a lesser degree of the foot-wall, that is not captured by these spot-heights and short traverses. Specifically, InSAR imaging shows distributed deformation extending hundreds of metres to kilometres perpendicular to surface scarps [78], and extending along-strike for kilometres beyond the surface rupture detectable in the field [70,78,79]. This is particularly the case for smaller earthquakes (Katanning [70] (Figure 8) and Lake Muir (in review [79])), where the rupture ellipse only partially intersects the surface. Without these satellite-derived deformation imaging techniques, the degree to which field observations and spot-height measurements along the visible surface rupture underestimate the length, height and width of surface deformation along a fault cannot be quantified.

Digitized offset data from resurveyed benchmarks across the Tennant Creek (Figure 7), Meckering (Figure 3) and Cadoux (EarthArXiv report [122]) ruptures provide evidence of distributed hanging-wall offset, though published uncertainties are on the order of measured offsets and the data should be interpreted with caution. These data cannot be improved upon within the resolution of the pre-deformation height data, but suggest that the deformation envelope extends beyond the discrete surface rupture, and that offset measurements as presented in Figures 3–10 may underestimate the true total vertical displacement values for each event. The ratio of distributed deformation to discrete deformation at a rupture tip might be expected to be larger for surface rupture segments that are more modest in vertical displacement, or that cut through relatively more weathered regolith or thicker sedimentary deposits, as much of the initial deformation will be taken up as folding prior to the emergence of the fault tip [149].

This paper reviews primary literature to identify environmental earthquake effects (EEEs) for the purposes of applying the ESI-07 Scale [10,150] to Australian surface rupturing earthquakes. We find that the majority of environmental damage is observed in the immediate rupture zone, with the exception of rare rockfalls in prone areas (e.g., road cuttings) at distances of ~200 km, and rare ground-water fluctuations up to 250 km away for some but not all events where ground-water data were investigated. While this dataset likely does not capture the full range of potential ESI values and affected area due to sparse reporting of EEEs in the literature, it does provide a basis for comparing the maximum ESI and magnitude of reverse earthquakes in intraplate, low-topography, near-surface crystalline bedrock (in most cases), and generally arid settings against events in tectonically and geomorphically diverse regions (e.g., [151–159]).


**Table 10.** Mw, area, and average displacement calculated from SCR dip-slip scaling relationships [148] using the length of this paper, compared with the average net slip calculated in Table 6 and the Mw of [11].

#### *Geosciences* **2019** , *9*, 408

#### *4.3. Recurrence of Historic Surface Ruptures and Implications for Hazard Modelling*

In the fifty years between 1968 and 2018, eleven moderate magnitude reverse earthquakes caused surface ruptures in cratonic Australia. Nine of the ten events analyzed show evidence of rupture along pre-existing structures with little to no evidence of prior Neotectonic movement. While this does not preclude the possibility that evidence of prior rupture was removed prior to the late Pleistocene, the lack of topographic or geomorphic evidence supporting repeated rupture suggests historic surface ruptures may have occurred on faults that could be considered previously inactive in the Neotectonic period (e.g., [4]).

It is unclear whether the historic surface rupturing faults have entered a period of activity and will host future Neotectonic earthquakes, have occurred as isolated events, or have such long recurrence intervals as to obscure all evidence of prior rupture. Paleoseismic work across the Precambrian SCR crust (Figure 1) has shown that faults in settings similar to those of the historic ruptures have hosted multiple Neotectonic earthquakes [2,34,108], with available dating indicating long recurrence (>30–70 ka [2]), and low topography indicating that erosion may outpace the seismic slip rate. In contrast, paleoseismic investigations in the Phanerozoic non-extended crust of eastern Australia identify multiple faults with recurrence frequent enough to maintain topography [3,5,34,101,107,109], despite no historic surface rupturing or large earthquakes in this part of the continent.

Historic surface rupture kinematics are all consistent with SHmax (as measured from bore-hole breakouts, drilling-induced fractures, and focal mechanisms [100]) either directly (e.g., a straight fault perpendicular to SHmax) or indirectly (e.g., rupture occurred along multiple faults, some of which are aligned oblique to SHmax, but uplift of the hanging-wall block is perpendicular to SHmax). The past fifty years of historic surface rupturing events show that in the Precambrian non-extended crust, basement with at least one set of linear structures aligned with SHmax, or multiple conjugate basement structures, could host a shallow moderate magnitude surface rupturing earthquake along one or multiple (in these cases, previously unrecognized and typically unrecognizable) faults. Eight of eleven surface rupturing earthquakes have occurred in areas of (or proximal to) preceding seismicity, while three (Petermann, Pukatja and Marryat Creek) occurred in areas with low historic seismicity, though instrument density limits the magnitude of completeness and the location accuracy and precision of the historic earthquake catalogue in these locations. This suggests that spatially smoothed (distributed) seismicity models may provide the best utility for seismic hazard analyses in the central and western parts of Australia (e.g., [161]). This is also relevant for assessments of earthquake hazard in Precambrian intraplate crust elsewhere (e.g., Canada [162–164]). Further work is required to understand tentative correlations between seismogenic potential and large geophysical anomalies and/or Moho discontinuities (e.g., [165,166]), and whether transient local stress perturbations increase the potential for shallow seismicity (e.g., changes in pore-fluid pressure [76] or surface load variations [4]).

The historic earthquake catalogue for Australia is complete for ML > 5.5 since 1910, and ML > 5.0 since 1960 [102]. The magnitude values of historic earthquakes were recently revised [11]. This new catalogue contains seven Mw > 5.5 on-shore earthquakes within the Precambrian non-extended crust that are not related to the historic surface rupturing events, and only one onshore event in the eastern Phanerozoic crust (Figure 1). The Precambrian crustal events include: four events (1941 Mw 5.6, 5.9, 6.5, and 1972 Mw 5.6) in the Simpson Desert NT [80,83,167], one event (1970 Mw 5.9) within the Lake Mackay WA sequence (20 events Mw 4.5–5.5 between 1970 and 1992) [23,81,83,167], one event 200 km south of Warburton WA (1975 Mw 5.6), and the 1941 Mw 6.8 Meeberrie WA event—Australia's largest recorded onshore earthquake (Figure 1). No surface ruptures have been identified for these events. While depths are poorly constrained due to low instrument density, estimates range from 7–33 km [11,83], deeper than the best estimates of depth for surface rupturing events (1–4 km for centroids, <6 km for hypocentral/base-of-fault depth). This suggests that moderate magnitude and potentially damaging earthquakes (e.g., Mw > 5.5) can be generated at depths of up to 33 km within the Precambrian non-extended crust, providing another source of hazard that cannot be effectively captured by active-fault catalogues in seismic hazard analysis.

#### **5. Conclusions**

We provide new length, dip, and net-slip data derived using a consistent framework and methodology in order to facilitate more consistent integration of Australian events into earthquake catalogues and displacement-length scaling relationships. Our reanalysis of primary data from 67 publications on ten of eleven historical surface rupturing earthquakes in Australia shows:


**Author Contributions:** Conceptualization, M.Q. and T.R.K.; methodology, T.R.K.; validation T.R.K. and D.C.; formal analysis, T.R.K.; investigation, T.R.K.; data curation T.R.K.; writing—original draft preparation, T.R.K.; writing—review and editing, T.R.K., M.Q. and D.C.; visualization, T.R.K. and M.Q.; supervision, M.Q.; project administration, T.R.K. and M.Q.; funding acquisition, M.Q.

**Funding:** This research was funded by the Australian Research Council through Discovery Grant #DP170103350. T. King received funding through the Australian Government Research Training Program Scholarship.

**Acknowledgments:** The authors thank the editors and three reviewers for comments that improved this work. We would like to acknowledge the Noongar people of south-west Western Australia, the Warumungu people of Tennant Creek, and the Antakirinja, Yankunytjatjara, and Pitjantjatjara people of the Western Desert and APY lands in South Australia/Northern Territory, as the traditional custodians of the land on which all historic surface ruptures occurred, and where the data described in this paper were collected. D. Clark publishes with the permission of the Chief Executive Officer of Geoscience Australia.

**Conflicts of Interest:** The authors declare no conflict of interest.

#### **References**


© 2019 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).

## *Review* **Post Seismic Catalog Incompleteness and Aftershock Forecasting**

**Eugenio Lippiello 1,\*, Alessandra Cirillo 1, Cataldo Godano 1, Eleftheria Papadimitriou <sup>2</sup> and Vassilis Karakostas <sup>2</sup>**


Received: 17 July 2019; Accepted: 6 August 2019; Published: 12 August 2019

**Abstract:** A growing interest appears among public authorities and society in accurate and nearly real time aftershock forecasting to manage and mitigate post-seismic risk. Existing methods for aftershock forecasting are strongly affected by the incompleteness of the instrumental datasets available soon after the main shock occurrence. The deficit of observed events, in the first part of aftershock sequences, can be naturally attributed to various mechanisms such as the inefficiency of the seismic network and the overlap of earthquake signals in seismic records. In this review, we show that short-term aftershock incompleteness can be explained only in terms of the second mechanism, whereas it is only weakly affected by the quality of the instrumental coverage. We then illustrate how standard models for earthquake forecasting can be modified to take into account this incompleteness. In particular, we focus on forecasting methods based on the data available in real time, in which many events are missing and the uncertainty in hypocenter location is considerable. We present retrospective tests that demonstrate the usefulness of these novel methods compared with traditional ones, which implement average values of parameters obtained from previous sequences.

**Keywords:** catalog incompleteness; seismic hazard

#### **1. Introduction**

Although it remains an open question whether or not the accurate, reliable prediction of individual earthquakes is a realistic scientific goal, the possibility of forecasting future earthquakes exists. The two major examples concern the estimation of the occurrence probability of large shocks over a very long temporal interval (decades up to centuries) and the estimation of the aftershock occurrence rate after a large earthquake. Neither of the two cases is relevant in predicting the occurrence of an impending large earthquake, but both examples provide very useful information for mitigating the impact of earthquakes that are likely to occur. As a matter of fact, the first example, usually defined as long-term (LT) seismic forecasting, is probably the most relevant from an engineering point of view, such as urban planning and building construction: it allows one to address questions such as the maximum magnitude expected in a given area over the next years. Concerning the second example, usually defined as post-seismic Short-Term Aftershock (STA) forecasting, many events (the aftershocks) are always observed soon after the occurrence of a strong shock (the main shock). Aftershocks can attain sizes comparable to their triggering main shock and can be very dangerous since they impact buildings already damaged by the previous shocks.

This review is focused on STA forecasting, which can potentially be very efficient. Indeed, the organization in time, space and energy of aftershocks follows well established empirical laws such as the Gutenberg–Richter (GR) and the Omori–Utsu (OU) law [1,2], which can be implemented in forecasting models. The GR law states that the magnitude distribution of earthquakes is an exponential function *P*(*m*) ∼ exp(−*βm*), and the OU law characterizes the power law decay of the aftershock rate as a function of the time *t* since the main shock.
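As a minimal illustration of these two empirical laws (parameter values are illustrative, not fits from this review):

```python
import math, random

# Illustrative sketch of the GR and OU laws (illustrative parameters,
# not fitted to any catalog discussed in the review).
beta = math.log(10)  # GR with b = 1

def sample_gr_magnitude(m_min, rng):
    # Inverse-transform sampling of P(m) ~ exp(-beta * (m - m_min))
    return m_min - math.log(1.0 - rng.random()) / beta

def omori_utsu_rate(t_days, K=100.0, c=0.01, p=1.1):
    # Power-law decay of the aftershock rate with time since the main shock
    return K / (t_days + c) ** p

rng = random.Random(42)
mags = [sample_gr_magnitude(2.0, rng) for _ in range(10000)]
# Under GR with b = 1, roughly 10% of events exceed m_min + 1
print(sum(m > 3.0 for m in mags) / len(mags))
print(omori_utsu_rate(0.1) > omori_utsu_rate(1.0) > omori_utsu_rate(10.0))  # True
```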

Even if the LT and STA forecasting act on two very different time scales, the two problems are intimately related. In the most simple description, seismic occurrence can be viewed as the superposition of two different stochastic processes: background seismicity responsible for mainshocks, which are the target of the LT forecasting, and aftershock occurrence, which is the target of STA. Hence, to achieve an accurate LT forecasting method a so-called declustering procedure is necessary, which allows one to isolate the two processes by means of a detailed knowledge of aftershock features. A clear example is the Epidemic Type Aftershock Sequence (ETAS) model introduced by Ogata [3] and probably representing nowadays the most popular model for STA as well as among the most efficient tools for LT forecasting. Studies of STA forecasting models, such as the ETAS model or more simple models implementing the OU law, have shown [4–14] that the incompleteness of datasets strongly affects the estimation of model parameters. This effect is more relevant in the first part of aftershock sequences when many earthquakes, in particular small ones, are not recorded and therefore not reported in seismic catalogs. This is mainly caused by the overlap of the signal of individual earthquakes in the seismic records. At the same time, incompleteness is also produced by the overload of processing facilities, due to a very large number of events in a narrow temporal window, and the damage caused by the mainshock to the seismic stations. Because of these difficulties, in many cases, operational probability forecasts only start more than 24 h after the mainshock [15].

In this review, we explore the problem of incompleteness of instrumental datasets focusing in particular on the so-called Short Term Aftershock Incompleteness (STAI). This is the main subject of Section 2. In Section 3, we review recent results on the influence of STAI on the estimation of parameters of STA forecasting models. Section 4 is then devoted to show that STAI is an intrinsic property of seismic catalogs which is not related to the efficiency of the seismic network. We conversely show that the main mechanism responsible for STAI is the overlap of aftershock coda waves with the waveforms of other events which obscure small aftershocks that occur close in time after larger ones. In Section 5, we show some approaches recently proposed to take explicitly into account this "obscuration" effect within the ETAS model. These approaches, however, are not simple to be implemented in real-time automatic procedures for aftershock forecasting. This is the topic of Section 6, which presents two different procedures developed to provide accurate STA forecasting, several minutes after the occurrence of a mainshock: the Omi et al. method [7,9,10] and the Lippiello et al. method [16,17]. The test of these two methods in retrospective studies is presented in Section 6 and final conclusions are drawn in the last section.

#### **2. Catalog Incompleteness**

Catalog completeness is usually quantified in terms of a magnitude threshold (or lower cut-off) *mc* defined as the magnitude above which all events are identified and included in the catalog. An accurate estimate of *mc* is fundamental in seismic forecasting. Too high a value discards usable data and leads to a loss of information by under-sampling. Conversely, too low a value leads to an unreliable estimation of parameter values, and thus to a biased analysis, because of the incomplete dataset. A standard way of estimating *mc* is to find the minimum magnitude above which the best fit with the GR law is obtained. The value of *mc* clearly depends on the ability to filter noise and on the distance between the earthquake epicenter and the seismic stations necessary to trigger an event declaration in a catalog. Instrumental data from Taiwan seismicity, for example, give [18], at a given location *r*, the completeness magnitude *mc*(*r*)

$$m\_c(\vec{r}) = 4.83\,d^{0.09} - 4.36. \tag{1}$$

where *d* = |*r* − *r*<sub>3</sub>| is the distance in kilometers between the epicenter and the position *r*<sub>3</sub> of the third nearest seismic station. In Figure 1, we present the *mc* map for Southern California obtained in [19] via the method of Amorése [20]. In particular, we observe a region in the central part of Southern California with a higher density of seismic stations, characterized by *mc* ≤ 1.4. This region, defined as Region 1, contains 36% of the *m* > 2.5 events recorded in the entire catalog. The remaining Southern California region (defined as Region 2) has a completeness magnitude starting from *mc* = 1.5 and becoming as large as *mc* ≈ 3 near the borders. A similar behavior is found if *mc* is evaluated according to the method of Schorlemmer and Woessner [21].
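Equation (1) is straightforward to evaluate; for example (the station distances below are hypothetical):

```python
# Equation (1) as a function: static completeness magnitude from the
# distance d (km) to the third-nearest seismic station (example distances
# are hypothetical, not taken from the Taiwan dataset).
def mc_static(d_km):
    return 4.83 * d_km ** 0.09 - 4.36

print(round(mc_static(10.0), 2))   # 1.58
print(round(mc_static(100.0), 2))  # 2.95
```

The weak exponent (0.09) means *mc* grows slowly with station distance: a tenfold increase in *d* raises *mc* by only about 1.4 magnitude units in this example.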

**Figure 1.** Magnitude completeness in Southern California. The value of *mc* can be obtained by the color bar and triangles identify the location of seismic stations. Green dashed lines define Region 1. Region 2 is the complement to Region 1 with respect to the entire Southern California (From [19]).

We stress that *mc* estimated from Equation (1) is a static quantity, controlled by the number of seismic stations, and we define it as "the static completeness magnitude". On the other hand, instrumental data show that the *mc* value, inside a given region, changes with time reaching much larger values in the first part of the aftershock sequence. As already anticipated in the Introduction, the dependence of the completeness magnitude *mc*(*t*) on the time *t* since the main shock occurrence is usually termed Short Term Aftershock Incompleteness (STAI). Results in [22–24] give a completeness magnitude *mc*(*t*) which depends logarithmically on the time *t* since the main shock

$$m\_{\rm c}(t, m\_M) = m\_M - \frac{1}{d} \left( \log\_{10} \left( \frac{t}{C\_0} \right) \right),\tag{2}$$

where *mM* is the main shock magnitude and *d* and *C*<sub>0</sub> are fitting parameters. We refer to Equation (2) as the Kagan–Helmstetter formula, with the best fitting parameters *d* ≈ 1 and *C*<sub>0</sub> ∼ 10<sup>−4</sup> days, when time is measured in days. In Figure 2, we plot the experimental aftershock magnitude distribution evaluated for different temporal intervals after the *m* = 7.3 Landers earthquake, in Southern California. Experimental results show a magnitude distribution that is approximately flat for values *m* < *mc*(*t*), whereas curves appear parallel on a semi-logarithmic scale for *m* > *mc*(*t*), consistently with a GR law with *b* ≈ 1. The crossover magnitude *mc*(*t*) is in agreement with Equation (2).
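Equation (2) with the quoted best-fit parameters (*d* ≈ 1, *C*<sub>0</sub> ∼ 10<sup>−4</sup> days) can be evaluated directly; for a Landers-scale *m* = 7.3 main shock:

```python
import math

# Equation (2) with the quoted best-fit parameters d ~ 1, C0 ~ 1e-4 days
# (the evaluation times below are illustrative).
def mc_time(t_days, m_mainshock, d=1.0, c0=1e-4):
    return m_mainshock - (1.0 / d) * math.log10(t_days / c0)

# m = 7.3 main shock: completeness recovers logarithmically with time
print(round(mc_time(0.001, 7.3), 1))  # 6.3 about 1.4 minutes after
print(round(mc_time(1.0, 7.3), 1))    # 3.3 one day after
```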

**Figure 2.** The number of aftershocks with magnitude larger than *mth* for the *mM* = 7.3 Landers earthquake in Southern California, evaluated in different temporal windows *δt* from the main shock. The green dashed line is the exponential behavior expected according to the GR law with *b* = 1.

In a different approach [5,7,9], STAI is taken into account by considering a magnitude distribution

$$P\_{\beta,\sigma}(m) \propto e^{-\beta m} \Phi\left(m|\mu(t), \sigma\right) \tag{3}$$

given by the GR law multiplied by the detection rate function Φ, which is represented by an error function

$$\Phi(m|\mu(t),\sigma) = \frac{1}{\sqrt{2\pi\sigma^2}} \int\_{-\infty}^{m} e^{-\frac{\left(x-\mu(t)\right)^2}{2\sigma^2}}\,dx.\tag{4}$$

In the above equation, the function *μ*(*t*) represents the 50% detection magnitude and *σ* represents the range of the magnitude of partially detected earthquakes, i.e., at time *t*, only 50% of the events with *m* = *μ*(*t*) are expected to be detected, whereas more than 98% of events are expected to be detected if *m* > *μ*(*t*) + 2*σ*. A reasonable definition therefore corresponds to assuming *mc*(*t*) = *μ*(*t*) + 2*σ*. In particular, Ogata and Katsura [5] proposed that *μ*(*t*) obeys the law

$$
\mu(t) = \nu\_0 + \nu\_1 \exp\left(-\nu\_2 \left(3 + \log\_{10}(t)\right)^{\nu\_4}\right) \tag{5}
$$

where the *ν<sup>i</sup>* are fitting parameters. On the other hand, in a series of papers, Omi et al. [7–10,15] developed an elegant method to obtain a non-parametric fit of the function *μ*(*t*) and an estimate of *σ* from the occurrence times and magnitudes of all recorded events in a given learning period.
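The detection-rate construction of Equations (3) and (4) can be sketched as follows (the *μ* and *σ* values below are illustrative, not fitted):

```python
import math

# Sketch of Equations (3)-(4): GR law multiplied by an error-function
# detection rate; Phi is the normal CDF with mean mu(t) and width sigma
# (the mu and sigma values below are illustrative).
def detection_rate(m, mu, sigma):
    return 0.5 * (1.0 + math.erf((m - mu) / (sigma * math.sqrt(2.0))))

def detected_magnitude_density(m, beta, mu, sigma):
    # Unnormalized P(m) ~ exp(-beta * m) * Phi(m | mu, sigma)
    return math.exp(-beta * m) * detection_rate(m, mu, sigma)

# At m = mu half the events are detected; at mu + 2*sigma more than 97.7%
print(detection_rate(3.0, mu=3.0, sigma=0.5))          # 0.5
print(detection_rate(4.0, mu=3.0, sigma=0.5) > 0.977)  # True
```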

In Figure 3, we plot the results by Omi et al. [9] for *μ*(*t*) and *μ*(*t*) + 2*σ* for three aftershock sequences in Japan. These results are compared with the Ogata–Hirata formula (Equation (5)) and the Kagan–Helmstetter formula (Equation (2)). Figure 3 shows that the Omi and the Ogata–Hirata models give similar behavior for *μ*(*t*) and are able to capture the time variation of the detection rate. In contrast with these two models, since the parameters of the Kagan–Helmstetter formula are fixed for all sequences, it cannot reproduce the diverse recovering dynamics of the completeness magnitude that considerably depends on each aftershock sequence. The comparison of the forecasting skill of these three methods, for 38 Japan aftershock sequences, shows that the Omi method performs slightly better than the Ogata–Hirata methods and much better than a Kagan–Helmstetter formula [9].

**Figure 3.** Examples of the estimated time-varying 50% detection rate *μ*(*t*) (solid curve) of magnitudes and estimated various time-varying completeness magnitudes (dotted curve) as indicated in the inset, superimposed on the magnitude-time plot of the observed aftershocks during the first day of the main shock. From [9].

#### **3. The Influence of STAI on Model Parameters**

For a complete dataset, one expects that the rate of aftershocks *ρ*(*t*, *mM*, *mth*) with magnitude larger than a threshold value *mth* occurring after a time *t* following a mainshock of magnitude *mM* can be obtained by combining the GR law and the OU law

$$
\rho(t, m\_M, m\_{th}) = \frac{K}{(t+c)^p} e^{-\beta m\_{th}}.\tag{6}
$$

According to the productivity law [25], *K* depends on the main shock magnitude and Equation (6) can be written as

$$
\rho(t, m_M, m_{th}) = \frac{K_0\, e^{\alpha m_M - \beta m_{th}}}{(t+c)^p}.\tag{7}
$$

As already observed in [2], missing small events in the early stage of an aftershock sequence makes the estimates of the parameters *K*0, *α*, *β*, *c*, *p* in Equation (7) unstable, a problem which becomes particularly relevant at the beginning of aftershock sequences, when the completeness magnitude after large earthquakes can temporarily increase by several units [4,22,26,27]. For this reason, long- and short-term forecasts usually include corrections which take STAI into account [6,28,29].

Incompleteness, in particular, can make the *c*-value measured from instrumental catalogs, *cmeas*, much larger than the "true" *c*-value in the OU law (Equation (6)). Indeed, restricting to aftershocks with magnitudes larger than a reference value *mth*, if events with magnitudes *m* < *mc*(*t*) are not recorded, the measured *c*-value can be obtained from Equation (2) by setting *mc*(*cmeas*) = *mth*, which leads to

$$c_{meas} = C_0\, 10^{d(m_M - m_{th})}.\tag{8}$$

It is evident that this quantity depends on the parameters of Equation (2) but is not related to the *c*-value of the OU law. Alternatively, an estimate of *cmeas* can be obtained from Equation (5) by setting *μ*(*cmeas*) = *mth* − 2*σ*. As a consequence, the incompleteness at short times hides the true value of *c*, which in turn introduces a strong bias in the evaluation of the parameters *K*<sup>0</sup> and *α* in Equation (7), strongly affecting routines for short-term aftershock forecasting at times *t* < *cmeas*.
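As a quick numerical illustration of Equation (8), the sketch below evaluates the apparent *c*-value for a few magnitude differences. The default *C*0 and *d* are the Region 1 fit values quoted in Section 3.2; the function itself is our own illustration, not part of the published analyses.

```python
def c_meas(m_M, m_th, log10_C0=-3.53, d=1.0):
    """Apparent Omori c-value of Equation (8): c_meas = C0 * 10**(d*(m_M - m_th)).

    The defaults are the Region 1 fit values quoted in Section 3.2,
    used here purely for illustration; the result is in days.
    """
    return 10.0 ** (log10_C0 + d * (m_M - m_th))

# The apparent c-value grows by a factor of 10**d per unit of magnitude
# difference, so incompleteness masks the true c for longer after larger shocks.
for dm in (1.0, 2.0, 3.0):
    print(f"m_M - m_th = {dm}: c_meas = {c_meas(dm, 0.0):.3g} days")
```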

#### *3.1. The Influence of STAI on the ETAS Parameters*

As anticipated in the Introduction, the ETAS model is nowadays probably the most popular one for STA forecasting. The assumptions of the ETAS model are: (1) the background seismicity is a stationary Poisson process that depends on the position *x*, *μ*(*x*); (2) every event, whether a background event or one triggered by a previous event, triggers its own offspring independently; (3) the expected number of direct offspring is an exponential function of the magnitude of the mother event (productivity law); and (4) the time lags between triggered events and the mother event follow the OU law. According to these assumptions, the occurrence rate of events with magnitudes *m* ≥ *m*<sup>0</sup> at the position *x* at time *t* is given by

$$\Lambda_{\text{ETAS}}(m,\vec{x},t) = \left[ \sum_{i=1}^{N} Q\left( |\vec{x}_i - \vec{x}|, t - t_i, m_i \right) + \mu(\vec{x}) \right] \beta e^{-\beta(m - m_0)} \tag{9}$$

where the sum extends over all events with magnitude *mi*, epicentral coordinate *xi* and occurrence time *ti* < *t* and

$$Q(\Delta r_i, t - t_i, m_i) = \frac{K_0(p-1)}{c}\, e^{\alpha(m_i - m_0)} \left(1 + \frac{t - t_i}{c}\right)^{-p} G(\Delta r_i, m_i) \tag{10}$$

with Δ*ri* = |*xi* −*x*|, which is the epicentral distance. The function *G*(Δ*ri*, *mi*) is a spatial kernel that explicitly depends on the triggering magnitude *mi* and *μ*(*x*) is the time independent contribution due to background seismicity.
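To make Equations (9) and (10) concrete, the following sketch evaluates the purely temporal ETAS conditional intensity, i.e., with the spatial kernel *G* integrated out; all parameter values are illustrative, not fitted to any catalog.

```python
import math

def etas_rate(t, history, mu=0.2, K0=0.05, alpha=1.8, c=0.01, p=1.1, m0=2.0):
    """Temporal ETAS conditional intensity of events with m >= m0
    (Equations (9) and (10), spatial kernel G integrated out).
    `history` holds (t_i, m_i) pairs; parameter values are illustrative."""
    rate = mu  # stationary background term
    for t_i, m_i in history:
        if t_i < t:
            # Omori kernel, normalized so it integrates to K0*exp(alpha*(m_i - m0))
            rate += (K0 * (p - 1) / c) * math.exp(alpha * (m_i - m0)) \
                    * (1 + (t - t_i) / c) ** (-p)
    return rate

# A m = 5 event at t = 0 dominates the total rate shortly afterwards:
print(etas_rate(0.1, [(0.0, 5.0)]))
```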

The influence of STAI on the estimates of the ETAS parameters was addressed by Zhuang et al. [13] for the 15 April 2016 Kumamoto earthquake sequence in Japan. Under the assumption that earthquake magnitudes are independent of their occurrence times, Zhuang et al. [13] replenished the short-term missing data of small earthquakes by using a bi-scale transformation. They then compared the maximum likelihood estimates of the ETAS parameters for the recorded dataset in the JMA catalog with those for the replenished one, considering only events above a lower magnitude threshold *mth* = *mc*. The results, plotted in Figure 4 as a function of *mc*, show that when the magnitude threshold *mc* ≥ 3, which is approximately the static completeness magnitude of the JMA catalog, the estimated ETAS parameters are about the same for both datasets. Conversely, important differences are found for values of *mc* < 3. For the replenished dataset, the estimated background rate *μ*(*x*) decreases roughly exponentially when the cut-off magnitude is increased, consistent with what is expected according to the GR law (Figure 4a). The original dataset, conversely, exhibits a flatter behavior, indicating the absence of small-magnitude events. Concerning the other parameters, the most striking feature is that in the replenished dataset all parameters depend only weakly on *mc*, as expected, whereas a non-trivial dependence on *mc* is observed in the JMA catalog.

The results of Zhuang et al. [13] indicate that estimating the ETAS parameters from the original dataset with a lower magnitude threshold *mc* < 3 leads to incorrect results. A similar conclusion was reached by Seif et al. [14], who studied how the ETAS parameters, obtained by the iterative approach of Zhuang et al. [30], depend on the lower magnitude threshold *mth*. In particular, Seif et al. [14] investigated two simulated ETAS catalogs: a complete one which implements the ETAS parameters estimated from the Southern California catalog, and an incomplete one where aftershocks of mainshocks with *mM* > 5 were removed if their magnitude was smaller than *mc*(*t*) given in Equation (2). The results plotted in Figure 5 show that for sufficiently small values of *mth*, the parameter inversion procedure does not recover the true values of *K*<sup>0</sup> and *p* used to generate the synthetic catalogs. Seif et al. [14] attributed the observed discrepancy to the fact that aftershocks triggered by events with *m* < *mth* are erroneously identified as direct aftershocks of some previous larger earthquake. This widens the distribution of direct aftershocks, leading to a smaller *p*-value. At the same time, because of the anticorrelation between *K*<sup>0</sup> and *p*, *K*<sup>0</sup> is overestimated. Figure 5, in particular, shows a striking difference between the estimated parameters in the complete and the incomplete catalogs. However, this difference tends to disappear for increasing *mth*, indicating that the influence of aftershock incompleteness is not significant for *mth* ≳ 3.5.

**Figure 4.** Different panels correspond to the ETAS parameters *μ*, *K*, *A* = *K*<sub>0</sub> ∫<sub>0</sub><sup>∞</sup>(*t* + *c*)<sup>−*p*</sup>d*t*, *c*, *α*, *p* (see axis labels) estimated from the Kumamoto aftershock sequence with different magnitude thresholds. The red and black dots are the estimates based on the original and the replenished datasets, respectively. The units of measure are *day*<sup>−1</sup>, *day*<sup>*p*−1</sup> and *day* for *μ*, *K* and *c*, respectively; the other quantities are dimensionless, except *A*, which represents the productivity of an event of magnitude *mc*. From [13].

The results of Figures 4 and 5 indicate that using a lower magnitude threshold *mth* below the completeness level can lead, especially for some parameters, to incorrect predictions. Unfortunately, it is not simple to establish a strict correspondence between the degree of incompleteness of the catalog and the error expected in the parameter estimates.

**Figure 5.** The ETAS parameters are plotted against *mth* for synthetic catalogs simulated with parameters from Southern California (gray) and compared with the parameters for the incomplete synthetic catalog (orange). The "true" parameter values are plotted with black dashed lines; the grey shaded region represents the 95% quantiles of 30 synthetic ETAS catalogs. The orange shaded region represents the 95% quantiles of 30 synthetic incomplete ETAS catalogs. From [14].

#### *3.2. Is STAI Related to the Static mc?*

As explained in Section 2, the static *mc* is a local quantity which depends on the local density of the seismic network *ρS*, as illustrated by Equation (1) and Figure 1. The influence of the density *ρ<sup>S</sup>* on STAI was addressed by de Arcangelis et al. [31] by investigating the *cmeas*-value in the two sub-regions of Southern California illustrated in Figure 1. As already explained in Section 2, the inner region (Region 1) has a high value of *ρ<sup>S</sup>* and a static *mc* ≤ 1.4. Conversely, *ρ<sup>S</sup>* is small in the external region (Region 2), where the static *mc* > 1.5, with values *mc* ∼ 3 close to the borders. To obtain an estimate of *cmeas* in each sub-region, de Arcangelis et al. [31] measured the aftershock daily rate *ρ*(*t*, *mM*, *mth*), defined as the number of aftershocks with magnitude larger than *mth* occurring at a temporal distance *t* after their triggering main shock with magnitude *m* ∈ [*mM*, *mM* + 1), divided by the number of mainshocks with magnitude *m* ∈ [*mM*, *mM* + 1). Three different values of *mM* = (3, 4, 5) and *mth* = (1.5, 2.5, 3.5) were considered. In this study, mainshock–aftershock pairs were identified according to the Baiesi–Paczuski (BP) declustering criterion [32–34], using the same parameters adopted by Moradpour et al. [35] and Hainzl [12]. In particular, only aftershocks identified as direct descendants of the mainshock were included in the analysis.

The results (Figure 6) show that the aftershock rate clearly depends on the magnitude difference *mM* − *mth* in both Region 1 and Region 2. In particular, de Arcangelis et al. [31] divided time by *τ* = 10<sup>*d*(*mM*−*mth*)</sup>, obtaining that data for different values of *mM* and *mth*, inside each sub-region, exhibit the scaling collapse *ρ*(*t*, *mM*, *mth*) = *F*(*t*/*τ*) (Figure 7a). It is evident from Figure 7a that the Omori decay *ρ* ∼ *t*<sup>−*p*</sup> sets in when *t*/*τ* becomes larger than a given value *x*0, different between the two regions. Since *cmeas* can be identified with the time at which the Omori decay sets in, Figure 7a gives *cmeas* = *x*0*τ*, and one recovers Equation (8) with the identification *x*<sup>0</sup> = *C*0. In particular, the best fit gives log10(*C*0) = −3.53 ± 0.05 and *d* = 1 ± 0.03 inside Region 1, and log10(*C*0) = −3.70 ± 0.05 and *d* = 0.95 ± 0.03 inside Region 2. This leads to a counterintuitive behavior, with a larger *cmeas*-value inside Region 1 even though the static *mc* is significantly smaller there than in Region 2, and a smaller *cmeas*-value in Region 2, where the static *mc* is larger. This result clearly indicates that *cmeas* is not related to *ρ<sup>S</sup>* and that STAI cannot be reduced by increasing the density of the seismic stations, suggesting that STAI originates from a different mechanism (see next section). The same conclusion can also be obtained from the measurement of the correlation between magnitudes according to the method proposed in [19,36–38]. This analysis [19,31] has shown significantly larger magnitude correlations in Region 1 than in Region 2.
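The scaling collapse can be sketched in a few lines: rescaling time by *τ* makes curves with the same *mM* − *mth* coincide. The scaling function below uses the Region 1 fit of Figure 7a; treating its logarithm as natural is our reading of the notation.

```python
import math

def tau(m_M, m_th, d=1.0):
    """Rescaling time of Section 3.2, proportional to c_meas (Equation (8))."""
    return 10.0 ** (d * (m_M - m_th))

def F(x, A=0.35, B=70.0, p=1.1):
    """Scaling function fitted in Figure 7a (Region 1): A*log(1 + B*x**(-p)).
    We assume a natural logarithm here."""
    return A * math.log(1.0 + B * x ** (-p))

# Under the collapse rho(t, m_M, m_th) = F(t/tau), curves for magnitude pairs
# with the same difference m_M - m_th fall on top of each other:
print(F(5.0 / tau(5.0, 2.5)), F(5.0 / tau(4.0, 1.5)))
```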

**Figure 6.** The number of events identified as aftershocks by the BP declustering procedure with magnitude larger than *mth*, which occurred at a temporal distance *t* from events identified as mainshocks with magnitude *m* ∈ [*mM*, *mM* + 1), is divided by the number of identified mainshocks and plotted versus *t*. Different panels correspond to different values of the mainshock magnitude class *m* ∈ [*mM*, *mM* + 1). Different colors correspond to results for different geographic regions: Region 1 (open green symbols) and Region 2 (filled magenta symbols). Different symbols indicate different values of the lower threshold: *mth* = 1.5 (circles), *mth* = 2.5 (squares) and *mth* = 3.5 (diamonds).

**Figure 7.** (Color online) (**a**) The same data as in Figure 6 are plotted as a function of *t*/*τ*, with *τ* = 10<sup>*d*(*mM*−*mth*)</sup> proportional to *cmeas* (Equation (8)) with *d* = 1, for different values of *mM* and *mth*. Filled (empty) colored symbols are used for data of Region 1 (Region 2). The magenta continuous line is the scaling function *F*(*x*) = *A* log(1 + *Bx*<sup>−*p*</sup>) with *A* = 0.35, *B* = 70 and *p* = 1.1, whereas the dashed green line is the scaling function *F*(*x*) = *A*(*x*/*B* + 1)<sup>−*p*</sup> with *A* = 300, *B* = 7 and *p* = 1.1. (**b**) The aftershock density *ρ*(*t*, *mM*, *mth*) in the ETASI1 catalog, with a blind time Δ*t* = 1 min, is plotted as a function of *t*/*τ*. Different values of *mM* and *mth* are plotted with different symbols: stars for *mM* − *mth* = 2.5, crosses for *mM* − *mth* = 1.5 and plus signs for *mM* − *mth* = 0.5. Different colors correspond to different values of *K*<sup>0</sup> and of the average background rate *rB*: *K*<sup>0</sup> = 0.035 and *rB* = 4.38 days<sup>−1</sup> (black), *K*<sup>0</sup> = 0.035 and *rB* = 8.3 days<sup>−1</sup> (green) and *K*<sup>0</sup> = 0.068 and *rB* = 4.38 days<sup>−1</sup> (red). Magenta continuous and green dashed lines are the same scaling functions *F*(*x*) plotted in (**a**). (Inset) The value of Δ*m* (Equation (18)) as a function of log10(*K*0) for the ETASI model with a blind time Δ*t* = 1 min (black crosses). The cyan line is the theoretical prediction (Equation (18)). From [31].

#### **4. The Origin of STAI and the Envelope Function**

Results of the previous section (Section 3.2) suggest that STAI is an intrinsic property of seismic catalogs, not related to the density of the seismic stations. This conclusion is strongly supported by the study of the envelope function *μe*(*t*) after several mainshocks that occurred in Greece and Italy during the last ten years [16]. More precisely, the envelope function *μe*(*t*) is obtained from the ground velocity recorded during the first days after the mainshock. The signal of each component is filtered by means of a two-pass Butterworth filter in the range [1, 10] Hz, the envelope of each signal is computed and the signals of the three components are superimposed. *μe*(*t*) is finally defined as the logarithm of the resulting signal. This quantity was introduced by Peng et al. [26] to identify aftershocks not reported in the JMA catalog during the first minutes after the main shock. The idea is that the occurrence of an aftershock must produce a double peak in *μe*(*t*), corresponding to the coupled pair of P and S arrivals. The local magnitude of the event is given by *m* ≃ *μmax* + *const*, where *μmax* is the maximum in *μ<sup>e</sup>* and the constant depends on the epicentral distance from the recording station, related to the S-P time difference.
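The envelope construction described above can be sketched with standard signal-processing tools; the sampling rate, filter order, and the use of the analytic-signal (Hilbert) envelope are our own choices, and synthetic noise stands in for real ground-velocity records.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def envelope_function(components, fs=100.0):
    """Envelope function mu_e(t): each component is band-pass filtered in
    [1, 10] Hz with a two-pass (zero-phase) Butterworth filter, its envelope
    is taken via the analytic signal, the three components are summed and
    the logarithm of the result is returned."""
    nyq = fs / 2.0
    b, a = butter(4, [1.0 / nyq, 10.0 / nyq], btype="band")
    total = np.zeros_like(components[0])
    for comp in components:
        filtered = filtfilt(b, a, comp)     # two-pass filter, no phase shift
        total += np.abs(hilbert(filtered))  # analytic-signal envelope
    return np.log10(total)

# Three synthetic "ground velocity" components (one minute at 100 Hz):
rng = np.random.default_rng(0)
mu_e = envelope_function([rng.standard_normal(6000) for _ in range(3)])
print(mu_e.shape)
```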

Considering the evolution of *μe*(*t*) after a mainshock that occurred at time *t*0, Lippiello et al. [16] found that the envelope function never falls below a value *μmin*(*t*), which is a logarithmically decreasing function of time (Figure 8)

$$
\mu\_{\min}(t) = \mu\_M - \phi \log(t - t\_0) - \Delta \mu\_{\min}.\tag{11}
$$

As a consequence, even very accurate analyses of post-seismic waveforms, including those which employ sophisticated matched-filter detection algorithms [39,40], do not allow one to identify small events which produce peaks smaller than *μmin*(*t*). This reflects a completeness magnitude *mc*(*t*) that depends on the time after the mainshock with a functional dependence similar to *μmin*(*t*); small events therefore cannot be found, and catalogs are intrinsically incomplete.

To understand the mechanism responsible for the existence of *μmin*(*t*), a closer inspection of the envelope function *μe*(*t*) after all mainshocks reveals the existence of two characteristic times, *τ* and *tM*. The first time *τ* is of the order of a few seconds, whereas *tM* is of the order of a few minutes, and three distinct regimes are observed:

• For *t* − *t*<sup>0</sup> < *τ*, the envelope increases up to its maximum value *μM*.

• For *τ* < *t* − *t*<sup>0</sup> < *tM*, the envelope decays logarithmically:

$$
\mu_e(t) \simeq \mu_M - q \log(t - t_0). \tag{12}
$$

• For *t* − *t*<sup>0</sup> > *tM*, the average value of the envelope *μe*(*t*) is still logarithmic but with different coefficients:

$$\langle \mu_e(t) \rangle = \mu_M - \phi \log(t - t_0) - \Delta\mu, \tag{13}$$

with *φ* < *q*.

The same three regimes have been found for other mainshocks in Southern California and in Italy [16]. The first two regimes can be easily associated with the mainshock waveform, which can be modeled as *μe*(*t* − *t*0) = *μ<sup>M</sup>* + log[*g*(*t* − *t*0)], where *g*(*t* − *t*0) is the mainshock envelope waveform. Experimental results suggest an initial linear increase of *g*(*t*) [41] followed by a fast decay consistent with an exponential function *g*(*t*) ∼ exp(−*Q*<sup>−1</sup>*t*) [42]. Figure 8 indicates that in the intermediate regime *τ* < *t* − *t*<sup>0</sup> < *tM*, with *tM* of the order of a few minutes, the envelope waveform is more consistent with a power-law decay, as proposed by Lee et al. [43]. Under these assumptions, the behavior of *g*(*t*) up to the time *t* − *t*<sup>0</sup> < *tM* can be modeled as *g*(*t*) ∼ *t*(*t*/*τ* + 1)<sup>−1−*q*</sup>, with the time *τ* representing the typical duration of the mainshock signal, leading to

$$
\mu_e(t) = \mu_M + \log(t - t_0) - (q + 1)\log\left((t - t_0)/\tau + 1\right). \tag{14}
$$

The existence of the third regime, previously highlighted by Sawazaki and Enescu [44], can be interpreted by taking into account that not only the main shock but also each aftershock of magnitude *mi*, occurring at time *ti*, produces a signal following the relation *μe*(*t*) = *μ<sup>i</sup>* + log[*g*(*t* − *ti*)]; one therefore expects a theoretical envelope of the form

$$\mu\_{th}(t) = \log \left\{ \max\_{t\_i < t} \left[ 10^{\mu\_i} g(t - t\_i) \right] \right\},\tag{15}$$

where the maximum must be evaluated for all aftershocks with occurrence times *ti* < *t*.
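Equation (15) amounts to taking, at each instant, the loudest surviving coda among all past events. A minimal sketch, using the waveform *g*(*t*) = *t*(*t*/*τ* + 1)<sup>−1−*q*</sup> with the values *τ* = 8 s and *q* = 2.5 quoted in this section (the event list is invented for illustration):

```python
import math

def g(t, tau=8.0, q=2.5):
    """Single-event envelope waveform g(t) = t*(t/tau + 1)**(-1-q)."""
    return t * (t / tau + 1.0) ** (-1.0 - q)

def mu_th(t, events):
    """Theoretical envelope of Equation (15): the largest of the individual
    contributions 10**mu_i * g(t - t_i) over all events with t_i < t."""
    peaks = [10.0 ** mu_i * g(t - t_i) for t_i, mu_i in events if t_i < t]
    return math.log10(max(peaks))

# A mainshock at t = 0 s followed by a smaller aftershock at t = 600 s:
# shortly after 600 s the fresh aftershock coda dominates the decayed
# mainshock coda, raising the envelope again.
events = [(0.0, 5.0), (600.0, 3.5)]
print(mu_th(599.0, events), mu_th(610.0, events))
```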

#### *Numerical Generation of the Envelope Function*

To verify that Equation (15) reproduces the experimental findings, Lippiello et al. [16] started from a mainshock with magnitude *mM* occurring at time *t*<sup>0</sup> and assumed that the aftershock rate follows the OU law (Equation (6)). Since *p*-values usually show small fluctuations among different aftershock sequences [45], Lippiello et al. [16] assumed a fixed value of *p* (*p* = 1.1) and, after choosing different values of *K* and *c*, generated an aftershock sequence according to Equation (6) for a temporal window of three days. Each aftershock is then assigned a magnitude randomly extracted from the GR law. After fitting the value of *τ* from the experimental *μe*(*t*), the key assumption is that a magnitude *mi* aftershock, occurring at time *ti*, generates a seismic signal with envelope *A*(*t*) = 10<sup>*mi*</sup> *g*(*t* − *ti*), with *g*(*t*) = *t*(*t*/*τ* + 1)<sup>−1−*q*</sup> and *q* = 2.5. The synthetic *μth*(*t*) is then obtained from Equation (15), and a vertical shift is finally applied in order to make the mainshock peak in *μth*(*t*) equal to the experimental *μM*. The numerical parameters *K*, *c* implemented in the OU law (Equation (6)) are then tuned in order to reach a good agreement between *μth*(*t*) and the experimental *μe*(*t*), according to the procedure described in Section 6.2. The results for *μth*(*t*), plotted as orange lines in Figure 8, show that it is possible to generate a synthetic envelope reproducing the experimental one in all three regimes. The above results indicate that, since each aftershock produces its own coda waves which decay as a power law with exponent *q*, the overlap of coda waves generated by subsequent aftershocks produces a lower bound *μmin*(*t*) on the signal, which decays with a coefficient *φ* < *q* (Equation (13)).
The same agreement between *μe*(*t*) and *μth*(*t*) is recovered for other mainshocks *mM* > 6 recorded in Greece, Italy and Southern California [16].
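The first step of this numerical procedure, drawing aftershock times from the OU rate and magnitudes from the GR law, can be sketched with a standard thinning algorithm; the GR parameters below are our own illustrative choices, while *K* and *c* follow the Figure 8 fit.

```python
import math
import random

def simulate_ou_sequence(K=0.95, c=0.18, p=1.1, beta=math.log(10), m_min=2.0,
                         t_max=3.0, seed=1):
    """Draw aftershock times from the OU rate rho(t) = K/(t+c)**p over
    `t_max` days (thinning algorithm) and attach GR magnitudes
    m = m_min - ln(U)/beta. K and c follow the Hector Mine fit quoted in
    Figure 8; beta = b*ln(10) with b = 1 is an illustrative GR choice."""
    rng = random.Random(seed)
    events, t = [], 0.0
    rate_max = K / c ** p                 # rho(t) is maximal at t = 0
    while True:
        t += rng.expovariate(rate_max)    # candidate from the bounding rate
        if t > t_max:
            break
        if rng.random() < (K / (t + c) ** p) / rate_max:  # thinning step
            m = m_min - math.log(1.0 - rng.random()) / beta
            events.append((t, m))
    return events

seq = simulate_ou_sequence()
print(len(seq))
```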

We wish to stress that the mainshock peak *μM*, as well as the aftershock peaks *μ<sup>i</sup>* in Equation (15), strongly depend on the distance of the recording station from the mainshock epicenter and on site effects. In addition, the functional form of *g*(*t*) can be different at different stations. As a consequence, both *μe*(*t*) and *μth*(*t*) are different at different stations but, under the hypothesis that aftershocks occur not too far from the mainshock hypocenter, the values of *K* and *c* providing the best agreement between *μe*(*t*) and *μth*(*t*) should be the same for all stations.

**Figure 8.** The quantity *μe*(*t*) (green circles) after the Hector Mine earthquake in California, recorded at the station CIGSC located at a distance of 92 km from the main shock epicenter. The magenta crosses indicate the (logarithmically binned with bin value 0.1) average value of *μ*(*t*), and the red continuous lines represent the results of the logarithmic fit (Equation (13)) for *t* − *t*<sup>0</sup> > *tM*. The dashed blue lines represent the quantity *μmin*(*t*), and orange lines are used for the results of numerical simulations of the theoretical envelope *μth*(*t*), defined in Equation (15). The values of the best-fitting parameters in Equation (6) are *K* = 0.95, *c* = 0.18 days and *τ* = 8 s.

#### **5. The ETASI Model**

In the previous section, we have shown that STAI is mostly due to the overlap among aftershock coda waves. This ingredient can be incorporated in the ETAS model by multiplying the ETAS occurrence rate Λ*ETAS* in Equation (9) by a detection function

$$
\Lambda_{\text{ETASI}}\left(\vec{x},t,m|\vec{x}_{i},t_{i},m_{i}\right) = \Lambda_{\text{ETAS}}\left(\vec{x},t,m|\vec{x}_{i},t_{i},m_{i}\right) \times \Phi(m,t,\mu(t)|m_{i},t_{i}).\tag{16}
$$

The detection rate can still be described by an error function, as in Equation (4), and we define the model described by Equation (16) as the ETAS Incomplete (ETASI) model. The main difference with Equation (3) is that in this approach the detection function Φ(*m*, *t*, *μ*(*t*)|*mi*, *ti*) depends on the history of all previous earthquakes {*mi*, *ti*}<sub>*i*=1</sub><sup>*N*</sup>, with *ti* < *t*. More precisely, in Equation (3) the 50% detection function *μ*(*t*) depends only on the time and magnitude of the main shock, whereas in Equation (16) each event can obscure the recording of subsequent earthquakes.

We observe that the ETASI model differs from the procedure adopted by Seif et al. [14], who generated incomplete ETAS catalogs by removing only aftershocks of mainshocks with *m* > 5. In the ETASI model, conversely, any event can obscure subsequent earthquakes, independently of its magnitude.

The simplest choice for the detection function Φ(*m*, *t*|*mi*, *ti*) was proposed by Hainzl [12] and corresponds to an error function with *σ* → 0 and *μ*(*t*) = *mi* if *t* − *ti* ≤ Δ*t*, whereas *μ*(*t*) = 0 for *t* − *ti* > Δ*t*, where Δ*t* is a constant blind time. This corresponds to the hypothesis that each earthquake hides all subsequent smaller events occurring at temporal distances smaller than Δ*t*. Notwithstanding the simplicity of this functional form of *μ*(*t*), this model, defined as ETASI1 in the following, leads to non-trivial temporal patterns of aftershock occurrence, as already shown by Hainzl [12].
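The ETASI1 detection rule can be written as a simple filter over a time-sorted catalog. Whether an already-hidden event can itself hide later ones is a modeling choice; here every event of the original catalog can hide, and the sample catalog is invented.

```python
def apply_blind_time(catalog, dt=1.0 / 1440.0):
    """ETASI1 rule (Hainzl [12]): an event is hidden if a larger event
    occurred less than the blind time `dt` before it (here dt = 1 min,
    expressed in days). `catalog` is a time-sorted list of (t, m) pairs.
    Every event of the original catalog can hide, whether or not it is
    itself detected (a modeling choice on our part)."""
    kept = []
    for i, (t, m) in enumerate(catalog):
        hidden = any(0.0 < t - t_j <= dt and m_j > m
                     for t_j, m_j in catalog[:i])
        if not hidden:
            kept.append((t, m))
    return kept

# Three events within one minute: the m = 3.0 event hides the later m = 2.0
# event but cannot hide the later, larger m = 3.5 event.
print(apply_blind_time([(0.0, 3.0), (0.0002, 2.0), (0.0004, 3.5)]))
```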

The hypothesis of a constant blind time allows one to achieve an analytical evaluation of *cmeas* [12]. Indeed, the blind time Δ*t* also represents the minimum temporal distance between two subsequent earthquakes reported in a catalog, and this leads to a maximum detectable rate *ρmax* ≃ 1/Δ*t*. As a consequence, since the "true" aftershock rate is a decreasing function of the time *t* after the mainshock occurrence (Equation (6)), the measured *ρ*(*t*, *mM*, *mth*) corresponds to the "true" aftershock rate only if *ρ*(*t*, *mM*, *mth*) < *ρmax*, a condition which is always fulfilled at large times. Conversely, at small times, when the "true" aftershock rate is larger than *ρmax*, the measured *ρ* exhibits a constant behavior *ρ*(*t*, *mM*, *mth*) ≃ *ρmax*. Accordingly, the *cmeas*-value can be identified as the time such that *ρ*(*cmeas*, *mM*, *mth*) = *ρmax* and, assuming *α* = *β*, Equation (7) gives

$$\rho\left(c_{meas}, m_M, m_{th}\right) = \frac{K_0\, 10^{b\left(m_M - m_{th}\right)}}{(c_{meas}+c)^p} = \rho_{max},\tag{17}$$

giving *cmeas* = −*c* + (*K*0/*ρmax*)<sup>1/*p*</sup> 10<sup>(*b*/*p*)(*mM* − *mth*)</sup>, which for *c* ≪ *cmeas* coincides with Equation (8):

$$C_0 = 10^{-\Delta m} = \left(\frac{K_0}{\rho_{max}}\right)^{1/p},\tag{18}$$

and *d* = *b*/*p*.
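The derivation can be checked numerically: solving Equation (17) exactly for *cmeas* differs from the approximation of Equation (8), with *C*0 and *d* from Equation (18), by exactly the constant *c*. All parameter values below are illustrative; *ρmax* = 1440 day<sup>−1</sup> corresponds to a 1-min blind time.

```python
def c_meas_exact(m_M, m_th, K0=0.035, b=1.0, p=1.1, c=0.01, rho_max=1440.0):
    """Solve Equation (17): K0*10**(b*(m_M-m_th))/(c_meas+c)**p = rho_max.
    rho_max = 1440/day corresponds to a 1-min blind time (illustrative)."""
    return (K0 * 10.0 ** (b * (m_M - m_th)) / rho_max) ** (1.0 / p) - c

def c_meas_approx(m_M, m_th, K0=0.035, b=1.0, p=1.1, rho_max=1440.0):
    """Equation (8) with C0 = (K0/rho_max)**(1/p) and d = b/p (Equation (18))."""
    C0 = (K0 / rho_max) ** (1.0 / p)
    return C0 * 10.0 ** ((b / p) * (m_M - m_th))

# For c << c_meas the two expressions agree up to the constant c:
print(c_meas_exact(6.0, 2.0), c_meas_approx(6.0, 2.0))
```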

The ETASI1 model can be implemented numerically via a two-step process. In the first step, standard ETAS catalogs are simulated and, in the second step, all events that occurred at a temporal distance smaller than Δ*t* after a larger event are removed from the catalog. de Arcangelis et al. [31] implemented different values of *K*<sup>0</sup> and analyzed the ETASI1 catalog by the same BP declustering procedure applied to the instrumental catalog. As in Figure 6, the aftershock daily rate *ρ*(*t*, *mM*, *mth*) for the ETASI1 catalog has been evaluated for different mainshock magnitudes *mM*, different thresholds *mth* and different *K*<sup>0</sup> values. This study has shown that the *cmeas*-value follows Equation (8) with *d* = *b*/*p*, as illustrated in Figure 7b, where *ρ*(*t*, *mM*, *mth*) is plotted as a function of *t*/*τ* with *τ* = 10<sup>*d*(*mM*−*mth*)</sup> and *d* = *b*/*p*. Data for different *mM* and *mth* and the same *K*<sup>0</sup> collapse onto the same master curve *F*(*t*/*τ*), as for the instrumental catalog (Figure 7a). Concerning the value of *C*0, de Arcangelis et al. [31] observed that the larger the value of *K*<sup>0</sup> implemented in the ETAS simulations, the larger the value of *C*<sup>0</sup> fitted from the decay of *ρ*(*t*, *mM*, *mth*). The results plotted in the inset of Figure 7b show that −Δ*m* = log10 *C*<sup>0</sup> becomes more positive for increasing *K*0, confirming the strong correlation between *C*<sup>0</sup> and *K*0. In particular, we observe that the dependence of *C*<sup>0</sup> on *K*<sup>0</sup> is consistent with Equation (18) only for small values of *K*0. Deviations from Equation (18) can be attributed to the cascading process implemented in the ETAS model. Indeed, aftershocks of higher-order generations are also followed by a blind time, which eventually hides aftershocks of previous generations. This causes a larger total blind time compared to the situation where higher-order-generation aftershocks are not considered, as in Equation (18).

The comparison between the data collapse observed for the ETASI1 catalog (Figure 7b) and the one observed for the instrumental Southern California catalog (Figure 7a) suggests that the larger *cmeas*-value inside Region 1 must be attributed to a larger productivity (larger *K*0) of that region. This agrees with the behavior of *ρ* (Figure 6) for times *t* > *cmeas*, when the "true" OU decay *ρ* ∼ *K*/*t*<sup>*p*</sup> is expected. Indeed, it is evident that, when *t* > *cmeas*, *ρ* in Region 1 is systematically larger than in Region 2.

We further observe that the scaling function *F*(*x*) presents clear deviations from the OU prediction *F*(*x*) ∝ (*x* + 1)<sup>−*p*</sup> in the intermediate temporal regime. We attribute these deviations to the cascading process, which can produce a more gradual decrease of the aftershock number from the initial plateau compared to the situation where higher-order-generation aftershocks are not taken into account [12]. A better fit of *F*(*x*) in numerical and instrumental catalogs is provided by *F*(*x*) = *A* log(1 + *Bx*<sup>−*p*</sup>), obtained by Lippiello et al. [46] under a dynamical scaling assumption [38,45,47–51].

#### *5.1. ETASI2*

A more refined expression for *μ*(*t*) within the ETASI model (Equation (16)) was proposed in [31] and corresponds to the so-called ETASI2 model. The idea is that the 50% detection function follows the same decay as the envelope function of a single earthquake; according to Equation (12), this corresponds to the assumption that

$$\mu(t) = \max\_{i: t\_i < t} \left( m\_i - q \log(t - t\_i) - \delta\_0 \right), \tag{19}$$

where the maximum is evaluated over all events with magnitude *mi* that occurred at times *ti* < *t*. The model is numerically implemented in [31] by taking for the detection rate function Φ an error function, as in Equation (4), with *σ* → 0. This corresponds to the two-step procedure illustrated in the previous section, with the removal from the original ETAS catalog of all events with magnitude *m* and occurrence time *t* such that *m* < *μ*(*t*). A finite value of *σ* is considered in [52].
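The σ → 0 limit of the ETASI2 model reduces to a deterministic filter: keep an event only if its magnitude exceeds the threshold of Equation (19). In the sketch below, times are in seconds, *q*, *δ*0 and the toy catalog are illustrative, and we let every event of the original catalog contribute to the threshold, since an event's coda obscures later events whether or not the event itself was detected.

```python
import math

def mu_detect(t, history, q=2.5, delta0=1.0):
    """50% detection threshold of Equation (19):
    mu(t) = max_i (m_i - q*log10(t - t_i) - delta0) over events with t_i < t.
    q and delta0 are illustrative; base-10 log matches magnitude units."""
    vals = [m_i - q * math.log10(t - t_i) - delta0
            for t_i, m_i in history if t_i < t]
    return max(vals) if vals else -math.inf

def etasi2_filter(catalog):
    """Two-step ETASI2 procedure with sigma -> 0: remove from a time-sorted
    (t, m) catalog every event with m below the current threshold mu(t)."""
    kept = []
    for i, (t, m) in enumerate(catalog):
        if m >= mu_detect(t, catalog[:i]):
            kept.append((t, m))
    return kept

# A m = 2 event 10 s after a m = 6 mainshock is hidden (threshold 2.5);
# the same magnitude 1000 s later is detectable (threshold -2.5).
print(etasi2_filter([(0.0, 6.0), (10.0, 2.0), (1000.0, 2.0)]))
```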

In de Arcangelis et al. [31], the coefficient *q* in Equation (19) is taken as a model parameter and its value was tuned in order to achieve the best agreement between the organization of aftershocks in the ETASI2 and instrumental catalogs. This study showed that the ETASI2 model provides a more accurate description of aftershock occurrence than the ETASI1 model; in particular, it better captures the correlation between subsequent magnitudes observed in instrumental catalogs. The agreement between instrumental and ETASI2 catalogs is obtained by setting a *K*<sup>0</sup> value, in the ETASI2 simulations, significantly larger inside Region 1 of Southern California (Figure 1) than inside Region 2. As a consequence, de Arcangelis et al. [31] proposed that the value of *K*<sup>0</sup> which provides the best overlap between ETASI2 and instrumental catalogs can be interpreted as the best estimate of the true productivity coefficient *K*<sup>0</sup> in each region.

#### *5.2. Dynamical Scaling ETAS Model*

An alternative to the ETASI model has been proposed on the basis of a dynamical scaling relation between time and energy [19,36,38,46,47]. Under this hypothesis, differently from the general assumption of the ETAS model [3,53,54], time and magnitude are not independent quantities: the magnitude difference fixes a characteristic time scale for the aftershock rate relaxation. Deviations from the GR law are a natural consequence of this assumption, with a completeness magnitude depending on time in agreement with what is observed in experimental data (Equation (2)). A maximum likelihood study [51] has shown that this approach provides a more accurate description of the aftershock rate decay than the ETAS model.

#### **6. Automatic Procedures for Short-Term Aftershock Forecasting**

In this section, we present two methods which have been developed to provide real-time aftershock forecasting: the Omi method [7,9,10] and the Lippiello method [16,17]. The idea of both methods is to extrapolate the parameters of the OU law, or more generally of the ETAS model, by means of an automatic procedure which uses the information available up to a time *T*<sup>2</sup> after the mainshock. The ETASI model, presented in the previous section, is not suitable for this purpose because it is not possible to apply a maximum likelihood estimation procedure to invert its parameters. For the likelihood evaluation, indeed, one would need access to the "obscured" events, information which is by definition unavailable. In this section, we review the retrospective tests performed with the Omi and the Lippiello methods. Both tests consider forecasting according to the OU law (Equation (6)), implementing the parameters *K* and *c* estimated for each individual mainshock sequence according to the information available in real time. This forecasting is compared to a generic model where the parameters *K* and *c* are taken as average values over many sequences. The results show that the novel methods outperform the generic model.
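The log-likelihood ratio score used in these retrospective tests can be sketched for a temporal OU forecast with the standard point-process (Poisson) log-likelihood; the (*K*, *c*) pairs here are hypothetical, standing in for the sequence-specific and generic fits.

```python
import math

def ou_rate(t, K, c, p=1.1):
    """OU aftershock rate K/(t+c)**p above a fixed threshold (Equation (6))."""
    return K / (t + c) ** p

def ou_integral(T1, T2, K, c, p=1.1):
    """Integral of the OU rate over the testing window [T1, T2]."""
    return K * ((T1 + c) ** (1 - p) - (T2 + c) ** (1 - p)) / (p - 1)

def info_gain_per_event(times, T1, T2, fitted, generic):
    """Log-likelihood ratio per aftershock between a sequence-specific OU fit
    and a generic model, each given as a (K, c) pair; `times` are aftershock
    times observed in [T1, T2]. The Poisson log-likelihood form is standard;
    the parameter pairs are hypothetical."""
    def loglik(K, c):
        return (sum(math.log(ou_rate(t, K, c)) for t in times)
                - ou_integral(T1, T2, K, c))
    return (loglik(*fitted) - loglik(*generic)) / len(times)

# Aftershocks clustered right after the window opens favor the sharper fit:
print(info_gain_per_event([0.11, 0.12, 0.13], 0.1, 1.0, (2.0, 0.01), (0.5, 0.5)))
```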

#### *6.1. The Omi Method*

The Omi method, briefly illustrated in Section 2, has been implemented in a real-time system for automatic aftershock forecasting in Japan. A systematic test of its efficiency, using real-time seismic data, was performed by Omi et al. [10] on the aftershock sequences of seven inland mainshocks with magnitude *m* ≥ 7 that occurred after the establishment of the Hi-net observation system. The Omi method is based on the evaluation of the parameters *K*, *p*, and *c* in Equation (6) using the information from an incomplete dataset, including only the recorded aftershocks. More precisely, Omi et al. [10] considered data in the learning period from two instrumental catalogs: Hi-net and JMA. The results of this method are compared to a standard forecasting approach (the generic model), which uses fixed parameter values determined from many aftershock sequences in Japan; the forecast from the generic model depends only on the mainshock magnitude. Performance is compared by means of the log-likelihood ratio score, referred to as the information gain *I*. The standard error *SI* of the information gain is also numerically evaluated and, under a Gaussian approximation, one forecast performs better than the other with a probability larger than 95% if *I* > 1.64*SI*. Omi et al. [10] considered four learning periods, corresponding to the first 3, 6, 12, and 24 h of aftershock data, to prepare forecasts for the following 3, 6, 12, and 24 h testing periods, respectively. The results of the test, for the seven Japanese aftershock sequences, are visually represented in Figure 9, which shows the information gain per aftershock against the generic aftershock model, considering separately data from the Hi-net and JMA catalogs. The error bars correspond to 1.64*SI*; therefore, if their lower bound is greater than zero, the Omi model performs better than the generic model. Omi et al. [10] separately considered two target magnitudes, the smallest one *Mt* = *Mc* (Figure 9a) and *Mt* = 3.95 (Figure 9b). The results show that, for the entire forecast period of 3–48 h, both the Hi-net and JMA forecasts significantly outperform the generic model, and that the same holds in all individual forecast periods for the lowest target magnitude *Mt* = *Mc*. Conversely, for *Mt* = 3.95, because of the small number of *m* > *Mt* aftershocks, the scores tend to have large error bars and, even if the Omi method generally outperforms the generic model, the difference is not statistically significant in most cases (Figure 9b).
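The significance criterion above can be made concrete in a few lines. The sketch below, with synthetic per-event log-likelihoods (the arrays and their values are illustrative assumptions, not data from Omi et al. [10]), computes the information gain per aftershock and applies the one-sided 95% Gaussian test *I* > 1.64*SI*:

```python
import numpy as np

def information_gain(ll_model, ll_generic):
    """Per-aftershock information gain of a forecast over the generic model.

    ll_model, ll_generic: arrays of per-event log-likelihoods under each model.
    Returns (I, SI): mean log-likelihood ratio per event and its standard error.
    """
    d = np.asarray(ll_model) - np.asarray(ll_generic)
    I = d.mean()
    SI = d.std(ddof=1) / np.sqrt(d.size)
    return I, SI

# Illustrative synthetic per-event log-likelihoods (not real catalog data).
rng = np.random.default_rng(0)
ll_generic = rng.normal(-2.0, 0.3, size=200)
ll_omi = ll_generic + rng.normal(0.15, 0.2, size=200)  # model slightly better

I, SI = information_gain(ll_omi, ll_generic)
# One-sided Gaussian test at the 95% level, as in Omi et al. [10]:
significant = I > 1.64 * SI
```

With these synthetic numbers the lower bound of the error bar, *I* − 1.64*SI*, stays above zero, which is exactly the graphical criterion used in Figure 9.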

Another interesting aspect of Figure 9 is the comparison between the performance of the Omi method implementing the JMA catalog and that implementing the automatic Hi-net catalog. In general, the results show that the JMA forecast significantly outperforms the Hi-net forecast for the small target magnitude *Mt* = *Mc*, probably because of the better accuracy of the JMA catalog. On the other hand, the two performances are comparable for *Mt* = 3.95, indicating that, even if the automatic Hi-net catalog is less accurate than the JMA catalog, it provides reasonable results for target magnitudes *Mt* ≥ 3.95. This is an important result since the Hi-net catalog is the only one available in real time.

**Figure 9.** Information gain per aftershock of the forecasts based on the Hi-net and JMA catalogs, respectively, relative to the generic model for the cases with: (**a**) *Mt* = *Mc*; and (**b**) *Mt* = 3.95. If the lower bound of the error bar is greater than 0, the forecast is significantly better than the generic model with a probability larger than 95%. From Omi et al. [10].

#### *6.2. The Lippiello Method*

Lippiello et al. [16] proposed a method based on the results presented in Section 4, which show that the instrumental envelope *μe*(*t*) can be reproduced by the theoretical envelope *μth*(*t*) given in Equation (15). In particular, *μth*(*t*) can be tuned to recover Equation (13) with the same parameters *φ* and Δ*μ* as the instrumental *μe*(*t*). The central observation is that the coefficients *φ* and Δ*μ*, which describe the logarithmic decay of *μth*(*t*) (Equation (13)), depend on the parameters *K* and *c* of the OU law (Equation (6)) implemented in the numerical simulation. This idea has been applied in a procedure which associates the best-fitting parameters (*φ*, Δ*μ*) in Equation (13), obtained from the experimental signal, with the pair (*K*, *c*) used in numerical simulations of the OU law. The procedure is schematically illustrated in Figure 10. First, one evaluates the value of *τ* which best approximates *μe*(*t*) in Equation (14) during the first 60 s. Fixing *p* = 1.1, the estimated value of *τ* is used to generate many numerical signals *μth*(*t*) for different choices of *K* and *c*, according to Equation (15). Then, one compares, in the learning period *t* − *t*0 ∈ [*T*1, *T*2], the average value of the numerical signal *μth*(*t*) with the experimental one *μe*(*t*). The slope *φ* of *μth*(*t*) depends fundamentally on the *c*-value, whereas *K* controls its vertical shift Δ*μ*. As a consequence, after choosing a given *K*-value, one varies the *c*-value until the slope of *μth*(*t*) becomes similar to that of the experimental *μe*(*t*) (Figure 10a). The *c*-value producing this agreement is defined as *c*<sup>∗</sup>, and one then generates different numerical catalogs with *c* = *c*<sup>∗</sup> and different values of *K* (Figure 10b). The value of *K* minimizing the difference between *μe*(*t*) and *μth*(*t*) in the interval [*T*1, *T*2] is defined as *K*<sup>∗</sup>. The pair of values (*K*<sup>∗</sup>, *c*<sup>∗</sup>) is considered the best representation of the experimental data and is used to forecast aftershock occurrence at times *t* − *t*0 > *T*2, according to Equation (6).

**Figure 10.** (**Left**) Black dotted lines represent the envelope function *μe*(*t*) of the Lixouri earthquake in Greece recorded at the station LKD2, located 70 km from the mainshock epicenter. Colored dot-dashed lines are used for *μth*(*t*) with *τ* = 11 s, *K* = 1.15, and different values of *c* in the interval [0.01, 4.5] days. Black circles represent the logarithmic fit (Equation (15)) in the interval [10, 160] min of the experimental envelope function, whereas continuous lines are used for the best fit of the numerical *μth*(*t*), with *c* increasing from 0.01 to 4.5 days from top to bottom. (**Right**) The same as in the left panel but plotting numerical data *μth*(*t*) with *τ* = 11 s, *c* = 1.15 days, and different values of *K* ∈ [0.75, 2.95]. Continuous lines are the logarithmic fits of the numerical data in [10, 160] min, with *K* increasing from bottom to top. From Reference [16].
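A minimal numerical sketch of this two-stage inversion is given below. It replaces the stochastic simulation of Equation (15) with a deterministic proxy for the mean log-envelope (the OU rate convolved with a power-law coda kernel; the kernel shape, the exponent `gamma`, and the synthetic "observed" envelope are illustrative assumptions, not the procedure of Reference [16]), but it reproduces the key structure exploited by the method: the slope of the log-envelope depends only on *c*, while *K* produces a pure vertical shift.

```python
import numpy as np

def ou_rate(t, K, c, p=1.1):
    """Omori-Utsu aftershock rate, Equation (6)."""
    return K / (t + c) ** p

def mean_envelope(tgrid, K, c, p=1.1, gamma=1.3, tau=11 / 86400):
    """Deterministic proxy for the mean log-envelope: log10 of the OU rate
    convolved with a power-law coda kernel w(u) = (1 + u/tau)^(-gamma).
    The kernel form and gamma are illustrative assumptions."""
    mu = []
    for t in tgrid:
        s = np.linspace(0.0, t, 2000)
        lam = ou_rate(s, K, c, p)
        w = (1.0 + (t - s) / tau) ** (-gamma)
        mu.append(np.log10(np.sum(lam * w) * (s[1] - s[0]) + 1e-12))
    return np.array(mu)

def fit_loglinear(tgrid, mu):
    """Least-squares fit of mu(t) ~ dmu - phi * log10(t); returns (phi, dmu)."""
    A = np.vstack([np.ones_like(tgrid), -np.log10(tgrid)]).T
    dmu, phi = np.linalg.lstsq(A, mu, rcond=None)[0]
    return phi, dmu

# Synthetic "observed" envelope in the learning window [10, 160] min.
T1, T2 = 10 / 1440, 160 / 1440                  # in days
tgrid = np.geomspace(T1, T2, 40)
K_true, c_true = 1.15, 0.05                     # hypothetical ground truth
mu_obs = mean_envelope(tgrid, K_true, c_true)
phi_obs, dmu_obs = fit_loglinear(tgrid, mu_obs)

# Stage 1 (cf. Figure 10a): scan c at fixed K until the slope phi matches.
# K only shifts the log-envelope vertically, so any fixed K works here.
c_grid = np.geomspace(0.01, 4.5, 60)
slopes = np.array([fit_loglinear(tgrid, mean_envelope(tgrid, 1.0, c))[0]
                   for c in c_grid])
c_star = c_grid[np.argmin(np.abs(slopes - phi_obs))]

# Stage 2 (cf. Figure 10b): scan K at c = c_star to match the envelope level.
K_grid = np.geomspace(0.1, 10.0, 60)
errs = [np.mean((mean_envelope(tgrid, K, c_star) - mu_obs) ** 2)
        for K in K_grid]
K_star = K_grid[np.argmin(errs)]
```

Because the "observed" envelope belongs to the same model family, the grid search recovers the generating pair (*K*, *c*) up to grid resolution; with real signals the match is only approximate, as in Figure 10.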

*Test of the Procedure*

To test their method, Lippiello et al. [16] considered as target aftershocks all the events producing a peak in the envelope function *μe*(*t*) with amplitude larger than *μ*0 = *μM* − 3, and defined *N*3(*t*) as their cumulative number in the temporal interval [*T*2, *t* − *t*0] after the mainshock. Similarly, *N*2(*t*) is the cumulative number of events producing peaks larger than *μ*0 = *μM* − 2.
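Counting such target events from a sampled envelope amounts to a thresholded peak count. A minimal sketch follows; the three-point local-maximum test and the synthetic envelope (which includes the mainshock peak itself among the counted maxima) are illustrative choices, not the detection procedure of Reference [16]:

```python
import numpy as np

def count_peaks(mu, mu_M, delta):
    """Count local maxima of a sampled log-envelope mu exceeding mu_M - delta.
    With delta = 3 this counts targets entering N3(t); with delta = 2, those
    entering N2(t). The three-point local-maximum test is a minimal
    illustrative choice."""
    mu = np.asarray(mu, dtype=float)
    thresh = mu_M - delta
    is_peak = (mu[1:-1] > mu[:-2]) & (mu[1:-1] >= mu[2:]) & (mu[1:-1] > thresh)
    return int(np.count_nonzero(is_peak))

# Synthetic envelope: mainshock peak mu_M = 5 followed by three smaller peaks.
mu = [0, 5, 0, 3, 0, 4, 0, 2.5, 0]
n3 = count_peaks(mu, mu_M=5, delta=3)   # peaks above mu_M - 3 = 2
n2 = count_peaks(mu, mu_M=5, delta=2)   # peaks above mu_M - 2 = 3
```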

The quantities *N*2(*t*) and *N*3(*t*) are plotted in Figure 11 for three mainshocks from three different geographic regions, for times *t* > 160 min. Lippiello et al. [16] compared the instrumental *N*3(*t*) and *N*2(*t*) with those expected according to the OU law (Equation (6)) after implementing the best values *K*<sup>∗</sup> and *c*<sup>∗</sup> obtained with the Lippiello procedure. More precisely, Lippiello et al. [16] considered a learning period [*T*1, *T*2] with *T*1 = 10 min and different values of *T*2. They found that for values of *T*2 ≳ 160 min the estimates of *K*<sup>∗</sup> and *c*<sup>∗</sup> become quite stable. They therefore set *T*2 = 160 min and found that, at all times *t* > *T*2, the Lippiello method predicts the number of occurred aftershocks with reasonable accuracy: differences between the predicted and observed aftershock numbers are typically smaller than 20% and always within the error bars. For comparison, in the same Figure 11, Lippiello et al. [16] also plotted the expected numbers *N*3(*t*) and *N*2(*t*) according to a generic model which implements in the OU law (Equation (7)) the values of *K*0, *c*, and *α* obtained as averages over all sequences with *mM* > 5 recorded in Southern California [55]. We observe that the number of strong aftershocks predicted by this generic model is much smaller (approximately ten times) than the observed one. We wish to stress that estimating *K* and *c* for each specific sequence from the earthquakes recorded in the official catalogs up to time *T*2 leads to unreliable results. As an example, in the case of the Lixouri earthquake, only three earthquakes are reported in the Greek catalog in the first thirty minutes after the mainshock. The situation is slightly better after the L'Aquila and Hector Mine earthquakes, when 20 events are reported in regional catalogs in the first thirty minutes. These numbers are too small to produce a reasonable estimate of *K* and *c*, which, in all cases, would be strongly biased by the incompleteness of the datasets, as confirmed by the absence of earthquakes with magnitude smaller than *m* = 3 in official catalogs in the first thirty minutes.
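The forecast counts plotted in Figure 11 follow from integrating the OU rate over the testing window, which has a closed form for *p* ≠ 1. A short sketch is given below; all parameter values are hypothetical, chosen only to illustrate the roughly tenfold underprediction of a generic model discussed above:

```python
import math

def expected_aftershocks(t, T2, K, c, p=1.1):
    """Expected number of target aftershocks in [T2, t] under the OU law
    (Equation (6)), lambda(s) = K / (s + c)^p, integrated in closed form."""
    if abs(p - 1.0) < 1e-9:
        return K * math.log((t + c) / (T2 + c))
    return K / (p - 1.0) * ((T2 + c) ** (1.0 - p) - (t + c) ** (1.0 - p))

T2 = 160 / 1440                            # 160 min, in days
# Hypothetical sequence-specific best-fit parameters vs generic averages:
N_specific = expected_aftershocks(t=7.0, T2=T2, K=25.0, c=0.05)
N_generic = expected_aftershocks(t=7.0, T2=T2, K=2.5, c=0.01)
```

With these illustrative numbers, the sequence-specific forecast expects roughly nine times more aftershocks in the first week than the generic one, mirroring the gap between the orange curves and red squares in Figure 11.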

Summarizing, the results of Figure 11 clearly show that the Lippiello method performs much better than the generic model, providing reasonable aftershock forecasting. Very recently, Lippiello et al. [17] proposed a more efficient procedure, still based on the agreement between *μth*(*t*) and *μe*(*t*), which produces even more accurate STA forecasting.

**Figure 11.** The quantities *N*3(*t*) (**top**) and *N*2(*t*) (**bottom**) are plotted for *t* − *t*0 > *T*2 = 160 min as green circles for the three main-aftershock sequences of Figure 1: the 26 January 2014 *m* = 6.1 Lixouri earthquake (**left**), the 16 October 1999 *m* = 7.1 Hector Mine earthquake (**middle**), and the 6 April 2009 *m* = 5.9 L'Aquila earthquake (**right**). Red squares are the expected values according to Equation (6) using the average values obtained in Reference [55]. The orange curves are the expected values using in Equation (6) the best parameters *K*<sup>∗</sup> and *c*<sup>∗</sup> inverted from the experimental fit of *τ*, *φ*, and Δ*μ*. The error bars in each plot incorporate both the uncertainty in the estimate of (Δ*μ*, *φ*) and fluctuations in the aftershock number for given values of *K* and *c*. From Reference [16].

#### *6.3. Comparison between the Omi and the Lippiello Methods*

The Omi method requires, as a preliminary stage, an automatic routine for real-time event detection. Its only assumption is that the GR law holds down to the lowest considered magnitude *mc*, a widely accepted idea within the seismological community. Conversely, the key assumption of the Lippiello method is that the theoretical envelope (Equation (15)) reproduces the instrumental one *μe*(*t*). This hypothesis is less consolidated but allows one to evaluate seismic hazard directly from the envelope function *μe*(*t*), without any information on the occurrence times, magnitudes, and locations of the earthquakes producing the observed signal. By overcoming the problems related to event identification and location, the Lippiello method presents some practical advantages.


Summarizing, the two methods appear as complementary approaches to the same problem and can be adopted simultaneously.

We finally remark that, in the Omi method, the spatial dependence of the aftershock occurrence probability can easily be included by multiplying the OU law (Equation (6)) by a decreasing function of the distance from the mainshock epicenter. However, the parameters controlling this decay are very difficult to invert from the ongoing specific sequence, and average quantities must be used. On the other hand, in the Lippiello method the spatial dependence is, at least in part, implicitly taken into account. Indeed, the method does not output the probability of an aftershock of a given magnitude but rather the occurrence probability of events which produce peaks in the envelope *μe*(*t*), at the position of the recording station, larger than a reference value *μ*0. This probability therefore clearly depends on the distance of the station from the mainshock epicenter.
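The first extension mentioned above can be written down directly. Below is a sketch of the OU law multiplied by a normalized isotropic power-law kernel of epicentral distance; the kernel form and the parameters *d* and *q* are illustrative assumptions, not values inverted from any sequence:

```python
import math

def ou_rate(t, K, c, p=1.1):
    """Omori-Utsu temporal rate, Equation (6)."""
    return K / (t + c) ** p

def space_time_rate(t, r, K, c, p=1.1, d=2.0, q=1.8):
    """OU rate multiplied by an isotropic spatial density normalized over the
    plane: f(r) = (q - 1) / (pi d^2) * (1 + (r/d)^2)^(-q), with r the
    epicentral distance in km. Kernel form and parameters d, q are
    illustrative assumptions."""
    f_r = (q - 1.0) / (math.pi * d ** 2) * (1.0 + (r / d) ** 2) ** (-q)
    return ou_rate(t, K, c, p) * f_r
```

Since f(r) integrates to one over the plane, the spatial factor redistributes, rather than changes, the total OU rate; its decay parameters would in practice have to be fixed to average values, as noted above.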

#### **7. Conclusions**

In this review article, we show that, in the first part of an aftershock sequence, incompleteness is an intrinsic property of seismic data. Indeed, the overlap of seismic signals makes the envelope function always greater than *μmin*(*t*). This lower threshold *μmin*(*t*) can be related to the minimum aftershock magnitude *mmin*(*t*) identifiable at time *t* since the mainshock, and indicates that, while it is feasible to obtain more accurate catalogs, it is impossible to reach completeness levels below *mmin*(*t*). This result also provides an explanation for the dependence of *mc*(*t*) on the time elapsed since the mainshock. We illustrate how incompleteness affects the estimate of the parameters of STA forecasting models, and we present some models which take it explicitly into account. In particular, we present an interpretation of the mechanisms responsible for the existence of *μmin*(*t*) in terms of the overlap of the coda waves generated by each individual aftershock: the combination of the decay of the aftershock rate (OU law) with the power-law relaxation of coda waves produces an envelope function *μe*(*t*) which, on average, depends logarithmically on the time since the mainshock. We illustrate the bias induced in the estimate of model parameters by the incompleteness of the instrumental catalog. A deeper investigation is necessary to establish a quantitative relationship between the expected error in the estimate of model parameters and the degree of incompleteness of the catalog.

We also show that the parameters of the logarithmic dependence of *μe*(*t*) are strictly related to the parameters of the OU law. We then describe a procedure, based on this observation and developed in Reference [16], to extract the OU law parameters from a fitting procedure applied to the experimental *μe*(*t*). This approach overcomes all problems related to event identification and location, since seismic hazard is evaluated directly from the envelope function *μe*(*t*) without any information on the occurrence times, magnitudes, and locations of the earthquakes producing the observed signal.

We also illustrate the Omi method [7,9,10,15] proposed to overcome the problems of STA forecasting caused by the incompleteness of instrumental data. We show that the method, based on the detection rate function, provides reliable aftershock forecasting on the basis of incomplete instrumental catalogs.

Summarizing, we have reviewed very recent proposals for real-time systems for automatic aftershock forecasting. The above procedures have so far been tested retrospectively but already appear suitable for implementation in prospective tests. These methods apply the OU law or the ETAS model without taking into account the spatial variability of seismicity. Future developments should focus on space-time models providing spatially dependent forecasts, which would be particularly useful in aftershock sequences with a complex spatial distribution.

**Funding:** This research received no external funding.

**Acknowledgments:** E.P. and V.K. acknowledge support of this work by the project HELPOS, Hellenic System for Lithosphere Monitoring (MIS 5002697), which is implemented under the Action Reinforcement of the Research and Innovation Infrastructure, funded by the Operational Programme "Competitiveness, Entrepreneurship and Innovation" (NSRF 2014–2020) and co-financed by Greece and the European Union (European Regional Development Fund). Geophysics Department Contribution 000/2019.

**Conflicts of Interest:** The authors declare no conflict of interest.

#### **References**


**Publisher's Note:** MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

© 2019 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).

