Article

Modified Block Bootstrap Testing for Persistence Change in Infinite Variance Observations

1 School of Sciences, Xi’an University of Science and Technology, Xi’an 710054, China
2 School of Computer Science and Technology, Xi’an University of Science and Technology, Xi’an 710054, China
* Author to whom correspondence should be addressed.
Mathematics 2024, 12(2), 258; https://doi.org/10.3390/math12020258
Submission received: 6 December 2023 / Revised: 9 January 2024 / Accepted: 10 January 2024 / Published: 12 January 2024
(This article belongs to the Special Issue Probability, Statistics and Random Processes)

Abstract: This paper investigates the properties of tests for a change in persistence for observations with infinite variance. The innovations are assumed to be in the domain of attraction of a stable law with index κ ∈ (0, 2]. We provide a new test statistic and show that its asymptotic distribution under the null hypothesis of a non-stationary I(1) series is a functional of a stable process. When the change point in persistence is unknown, consistency is still established under the alternative, either from stationary I(0) to non-stationary I(1) or vice versa. The proposed tests can be used to identify the direction of change and do not over-reject against constant I(0) series, even in relatively small samples. Furthermore, we also consider a consistent change point estimator and the asymptotic behavior of the test statistic in the case of near-integrated time series. A block bootstrap method is suggested to determine critical values, because the asymptotic null distribution contains the unknown tail index and the critical values therefore depend on it. The simulation study demonstrates that the block bootstrap-based test is robust for detecting changes in persistence in heavy-tailed series with infinite variance. Finally, we apply our methods to two series, the US inflation rate and the USD/CNY exchange rate, and find significant evidence of persistence changes, respectively, from I(0) to I(1) and from I(1) to I(0).

1. Introduction

There is growing evidence that the parameters of autoregressive models fitted to many economic, financial and hydrological time series are not fixed across time; see Busetti [1], Chen [2] and Belaire [3]. Being able to correctly characterize a time series into its separate stationary I(0) and non-stationary I(1) components has important implications for effective model building and accurate forecasting in economics and finance, especially concerning government budget deficits and inflation rates (e.g., Sibbertsen [4]). A number of testing procedures have been proposed depending on whether the initial regime is I(1) or I(0). Kim [5] proposed regression-based ratio tests of the constant I(0) null against the alternative of a single change in persistence, either from I(0) to I(1) or vice versa. Leybourne [6] discussed testing the null hypothesis that the series is I(1) throughout the sample versus the alternative that it switches from I(1) to I(0) or vice versa. When the direction of change is unknown, Leybourne [7] considered standardized cumulative sums of squared subsample residuals that can be used to identify the direction of change and do not spuriously reject when the series is a constant I(0) process. Since then, there have been further studies on persistence change. For example, Cerqueti [8] presented panel stationarity tests against changes in persistence, and Kejriwal [9] provided a bootstrap test for multiple persistence changes in a heteroskedastic time series. For more research on persistence change, we refer to Jin [10], Jin [11], Wingert [12], and Grote [13].
The above tests are designed to detect a change in persistence under finite variance and do not consider heavy-tailed series with infinite variance. However, Mittnik [14] argued that many types of economic and financial data have a heavy-tailed character. It is therefore of great practical significance to test for a change point with heavy-tailed observations, and many scholars have paid attention to the detection of changes in persistence in heavy-tailed time series models. In the case of the I(0) null hypothesis, Han [15] used a ratio test statistic for change-point detection with heavy-tailed innovations and proved its consistency when a persistence change is present. For the null hypothesis of an I(1) process against the alternative of a change in persistence from I(1) to I(0), Qin [16] applied a Dickey–Fuller-type ratio test statistic to infinite variance observations. Regarding online monitoring, Chen [17] adopted a kernel-weighted quasi-variance test to monitor persistence change in heavy-tailed series. For more on persistence change estimation and detection in heavy-tailed sequences with infinite variance, see Yang [18], Jin [19], Jin [20], and Wang [21].
However, the existing test methods for heavy-tailed sequences consider only a tail index κ ∈ (1, 2], excluding κ ∈ (0, 1], and suppose beforehand that the direction of the persistence change is known. In this paper, we propose a new test statistic to test the null hypothesis of I(1) against the alternative of a change in persistence, either I(0) → I(1) or I(1) → I(0). The innovations are assumed to be stationary time series with heavy-tailed univariate marginal distributions, which are in the domain of attraction of a κ-stable law with κ ∈ (0, 2]. We take into account two types of time series models: a pure AR(1) model and an AR(1) model with a constant or a constant plus linear time trend. Recently, the asymptotic inference for a least squares (LS) estimate when the autoregressive parameter is close to 1 (i.e., the series is nearly non-stationary) has been receiving considerable attention in the statistics and econometrics literature, such as Chan [22] and Cheng [23]. Therefore, we are also interested in deriving the asymptotic behavior of the proposed test in the context of a near-unit root. The critical values of the asymptotic distribution depend on the unknown tail index κ; to solve this problem, a block bootstrap approximation method suggested by Paparoditis [24] is used to determine the critical values, and its validity is proved. The performance of the bootstrap-based test in small samples is evaluated via an extensive Monte Carlo study. Finally, the feasibility of our proposed test procedures is illustrated through an empirical analysis.
Although the main results of this paper bear some formal analogy with Leybourne [7], it offers a number of important new implications. First, it extends the work on persistence change detection with heavy-tailed sequences to the case wherein the innovation process is in the domain of attraction of a stable law with index κ ∈ (0, 2]. Thus, we can test for structural change in persistence even if the mean of the real data does not exist. Second, under infinite variance, both for non-stationary and nearly non-stationary series, the convergence rate of (ρ̂_T − ρ_T) is T. This is somewhat intriguing, as this rate was originally motivated by the consideration of the magnitude of the observed Fisher information number in the finite variance case, as can be seen in Chan [25]. Third, as is well known, Kim [5] proved that the ratio-type test statistic diverges at the rate O_p(T^2), but it can only be applied to the alternative hypothesis involving a persistence change from I(0) to I(1). The Dickey–Fuller-type ratio test proposed by Leybourne [26] correctly rejects the null of no persistence change, and the tail in which the rejection occurs can also be used to identify the direction of change; however, a deficiency that cannot be ignored is that its divergence rate is less than O_p(T). It is therefore satisfying that the divergence rate of our proposed test statistic can reach O_p(T^2). In addition, we do not need to assume that the direction of any possible change is known, and the test almost never rejects in the left (right) tail when there is a change from I(1) to I(0) (from I(0) to I(1)).
This paper is organized as follows. Section 2 introduces the model, some necessary assumptions, and the test statistic. Section 3 details the large sample properties of the tests under both the constant persistence model of the non-stationary I(1) and the persistence change models. Moreover, the asymptotic distributions under I(0) and the near-unit root are given, and consistent change-point proportion estimators under the alternative hypotheses are also established. The algorithm of the block bootstrap is presented in Section 4. Monte Carlo simulations are presented in Section 5 to assess the validity of our proposed test procedures in finite samples. Section 6 applies our procedures to two time series datasets. We conclude the paper in Section 7. All proofs of the theoretical results are gathered in Appendix A.

2. Materials and Methods

As a model for a change in persistence, we adopt the following data-generating process (DGP):
y_t = \rho_t y_{t-1} + \eta_t, \quad t = 1, 2, \ldots, T, \qquad \eta_t = \lambda_1 \eta_{t-1} + \lambda_2 \eta_{t-2} + \cdots + \lambda_p \eta_{t-p} + \xi_t,
where λ_i, i = 1, 2, …, p, are unknown parameters and p is an integer greater than zero. {η_t} is a p-order autoregressive sequence, and the innovation {ξ_t} lies in the domain of attraction of a stable law and is taken to satisfy the following, quite general, dependent process assumption.
Assumption 1.
(i) ξ_t is an independent and identically distributed (i.i.d.) sequence. (ii) All the characteristic roots of 1 − λ_1 z − ⋯ − λ_p z^p = 0 lie outside the unit circle. (iii) ξ_t is in the domain of attraction of a stable law of order κ ∈ (0, 2], with E(ξ_t) = 0 if κ > 1 and {ξ_t} having a distribution symmetric about 0 if κ ≤ 1. The normal distribution corresponds to κ = 2.
Remark 1.
Under Assumption 1, Phillips [27] already proved that the scaled partial sums admit a functional central limit theorem, viz., \left( a_T^{-1}\sum_{t=1}^{[T\tau]}\xi_t,\; a_T^{-2}\sum_{t=1}^{[T\tau]}\xi_t^2 \right) \rightarrow \left( L_\kappa(\tau),\; \int_0^\tau (dL_\kappa)^2 \right), where a_T = \inf\{x : P(|\eta_t| > x) \le T^{-1}\} and the random variable L_κ(·) is a Lévy process. Similarly, from the studies by Ibragimov [28] and Resnick [29], it can be concluded that \left( a_T^{-1}\sum_{t=1}^{[T\tau]}\eta_t,\; a_T^{-2}\sum_{t=1}^{[T\tau]}\eta_t^2 \right) \rightarrow \left( U_\kappa(\tau),\; \int_0^\tau (dU_\kappa)^2 \right), where the random variable U_κ(·) is a κ-stable process. The exact definition of the κ-stable process appearing in the following is not necessarily known, but the quantities a_T can be represented as a_T = T^{1/κ}L(T) for some slowly varying function L(·).
Within Model 1, the sequence y_t is an I(0) process if |ρ_t| < 1, while it is an I(1) process if ρ_t = 1. We consider four possible scenarios. The first of these is that y_t is I(1) throughout the sample period; that is, ρ_t = 1, t = 1, 2, …, T. We denote this H_1. The second, denoted by H_01, is that y_t displays a change in persistence from I(0) to I(1) behavior at time [τ*T], where [·] denotes the integer part of its argument; that is, ρ_t = ρ, |ρ| < 1 for t ≤ [τ*T] and ρ_t = 1 for t > [τ*T], in the context of Model 1 or Model 2. The third, denoted H_10, is that y_t is I(1) changing to I(0) at time [τ*T]; in contrast to the second case, ρ_t = 1 for t ≤ [τ*T] and ρ_t = ρ, |ρ| < 1 for t > [τ*T]. The final possibility is that y_t is I(0) throughout; that is, ρ_t = ρ, |ρ| < 1, t = 1, 2, …, T. We denote this H_0. Here, the change-point proportion τ* is assumed to be in Λ = [Λ_L, Λ_U], an interval in (0, 1) which is symmetric around 0.5. Without loss of generality, we can take Λ_L = 0.2 and Λ_U = 0.8.
In practice, because both the location and direction of the change at proportion τ* are unknown in advance, we follow the approach of Leybourne [26] and consider a (two-tailed) test which rejects for large or small values of the statistic formed from the minimized Dickey–Fuller ratio statistic. In order to improve the performance of testing the null hypothesis H_1 against the alternative hypothesis H_01 or H_10, the modified ratio test statistic is defined by
\Xi = \frac{\sup_{\tau\in\Lambda}\,(T - [\tau T])\,(\hat{\rho}_2 - 1)^2}{\sup_{\tau\in\Lambda}\,[\tau T]\,(\hat{\rho}_1 - 1)^2} = \frac{\sup_{\tau\in\Lambda} N}{\sup_{\tau\in\Lambda} D},
where ρ̂_1, ρ̂_2 are LS estimates based on y_1, …, y_k and x_1, …, x_{T-k}, respectively, with k = [τT] and x_t = y_{T-t+1}. For convenience, let ρ_t = ρ_1 for t ≤ [τT] and ρ_t = ρ_2 for t > [τT].
In the next section, we provide representations for the asymptotic distribution of the Ξ statistic under the constant I(1) null H_1 and prove that a test that rejects for large (small) values of Ξ is consistent under H_10 (H_01). As a by-product, a consistent estimator of τ* is provided. Furthermore, it is shown that the asymptotic distribution of Ξ under H_0 or under near-integrated time series degenerates, which renders the test conservative against a constant I(0) process or near-integrated time series.

3. Results

We will establish the asymptotic null distribution of the proposed test. Throughout, we use ‘→’ to denote the weak convergence of the associated probability measures, and use U_κ(·) to denote a stable process with tail index κ.
Theorem 1.
Suppose that {y_t} is generated by Model 1 under H_1 and let Assumption 1 hold. Then, as T → ∞, we have
\Xi \rightarrow \frac{\sup_{\tau\in\Lambda} L(\tau, 1)}{\sup_{\tau\in\Lambda} L(0, \tau)},
where
L(a, b) = (b - a) \left( \frac{\int_a^b U_\kappa(r)\, dU_\kappa(r)}{\int_a^b U_\kappa^2(r)\, dr} \right)^2.
Furthermore, if we are interested in the model with a constant or a constant plus linear time trend, the following data-generating process is suggested:
y_t = d_t + \varepsilon_t, \qquad \varepsilon_t = \rho_t \varepsilon_{t-1} + \eta_t.
In Model 2, the deterministic kernel d_t is either a constant (d_t = μ) or a constant plus linear time trend (d_t = μ + βt), where μ ≠ 0 and β ≠ 0. {η_t} is a p-order autoregressive process as defined in Model 1. Similarly, ξ_t satisfies Assumption 1, with the additional restriction that the tail index κ ∈ (1, 2). This is because the LS estimates of μ and β satisfy (μ̂ − μ) = O_p(a_T T^{-1}) and (β̂ − β) = O_p(a_T T^{-2}), whose consistency is destroyed if κ ∈ (0, 1], resulting in the loss of validity of the subsequent block bootstrap method. Through some algebraic calculation, we can derive the following results under Model 2.
Lemma 1.
Suppose that {y_t} is generated by Model 2 under H_1 and let Assumption 1 hold. Let the superscript ζ = 0, 1 denote the de-meaned (d_t = μ) and the de-meaned and de-trended (d_t = μ + βt) cases, respectively. Then, as T → ∞,
\Xi \rightarrow \frac{\sup_{\tau\in\Lambda} L_1^{\zeta}(\tau, 1)}{\sup_{\tau\in\Lambda} L_2^{\zeta}(0, \tau)},
where L_1^{ζ}(τ, 1) and L_2^{ζ}(0, τ) are defined in Appendix A.
Remark 2.
Theorem 1 and Lemma 1 indicate that, under H_1, although the asymptotic distributions are functionals of the κ-stable process, they differ across the two models. Moreover, as shown in (2) and (3), the explicit form of the asymptotic distribution is complicated and non-standard, and it depends on the unknown tail index κ. Therefore, the block bootstrap method is used to determine the critical values, which will be introduced in Section 4 below.
In Theorem 2, we detail the large sample behavior of test Ξ under the persistence change alternative hypotheses H 10 and H 01 and give consistent estimators of τ * . The results stated hold for both the Model 1 and Model 2 cases.
Theorem 2.
Let {y_t} be generated by Model 1 or Model 2 and let Assumption 1 hold. Then,
(i) under H_10, we have Ξ = O_p(T^2) and τ̂ = arg sup_{τ∈Λ} (T^{-1}N) →_p τ*;
(ii) under H_01, we have Ξ = O_p(T^{-2}) and τ̃ = arg sup_{τ∈Λ} (T^{-1}D) →_p τ*,
where →_p signifies convergence in probability.
Remark 3.
The results in Theorem 2 imply that a consistent test of H_1 against H_10 (H_01) is obtained from the upper-tail (lower-tail) distribution of the test Ξ. A rejection in the upper (lower) tail is indicative of a change from I(1) to I(0) (from I(0) to I(1)), because Ξ diverges at the rate O_p(T^2) under H_10 and tends towards 0 at the O_p(T^{-2}) rate under H_01. This means that the tail in which the test rejects will also correctly identify the true direction of change. Therefore, even if the direction of change is unknown, as will typically be the case in practice, it is clear from Theorem 2 that Ξ can also be employed as a two-tailed persistence change test. In addition, the modified numerator (denominator) of the test Ξ yields a consistent estimator of the change-point fraction τ*.
In Theorem 3, we now establish the behavior of test Ξ under the constant I ( 0 ) process H 0 ; again, this result applies to both Model 1 and Model 2 cases.
Theorem 3.
Let {y_t} be generated by Model 1 or Model 2 and let Assumption 1 hold. Then, under H_0, we obtain Ξ →_p 1.
Remark 4.
The straightforward result is that the test Ξ will be conservative when run at conventional significance levels (say, the 5 % level or smaller) and will never reject under H 0 in large samples. That is because, under H 0 , the relevant critical values are lower (in the left-hand tail) and higher (in the right-hand tail) than the value 1.
In this paper, we also need to consider the case ρ_t = 1 − γ/T, where γ is a real number; that is, the sequence is a near-unit root process. For a Gaussian nearly non-stationary AR(1) model, Chan [30] showed that the asymptotic distribution of the LS estimate of ρ_t can be expressed as a functional of the Ornstein–Uhlenbeck process. Based on this research, the following Theorem 4 gives the asymptotic distribution of Ξ under the heavy-tailed near-unit root.
Theorem 4.
Let {y_t} be generated by Model 1 and let Assumption 1 hold. Then, under ρ_t = 1 − γ/T,
\Xi \rightarrow \frac{\sup_{\tau\in\Lambda} G(\tau, 1)}{\sup_{\tau\in\Lambda} G(0, \tau)},
where
G(a, b) = (b - a) \left( \frac{\int_a^b X_\kappa(r)\, dU_\kappa(r)}{\int_a^b X_\kappa^2(r)\, dr} - \gamma \right)^2
and X_κ(r) satisfies dX_\kappa(r) = -\gamma X_\kappa(r)\, dr + dU_\kappa(r), X_\kappa(0) = 0.
Similarly, the asymptotic distribution under Model 2 with ρ_t = 1 − γ/T can also be directly obtained.
Lemma 2.
Suppose that {y_t} is generated by Model 2 under ρ_t = 1 − γ/T and let Assumption 1 hold. We only consider the case with d_t = μ, and we have
\Xi \rightarrow \frac{\sup_{\tau\in\Lambda} G_1(\tau, 1)}{\sup_{\tau\in\Lambda} G_2(0, \tau)},
where G_1(τ, 1) and G_2(0, τ) are defined in Appendix A.
Remark 5.
Note that, when γ = 0, Theorem 4 reduces to Theorem 1; thus, Theorem 4 provides an asymptotic result for the infinite variance near-integrated AR(1) model. It can be seen that the asymptotic distribution is related not only to the κ-stable process but also to the parameter γ. Likewise, the conclusion of Lemma 2 coincides with that of Lemma 1 when γ = 0. It should be noted that Lemma 2 only gives the asymptotic distribution for the case d_t = μ; for d_t = μ + βt, the result is similar to Lemma 1 and is not described in detail here. In Section 5, we will report the rejection rates of the proposed test statistic under different γ to verify that our statistic is conservative in the case of the near-unit root and does not spuriously indicate a persistence change.

4. Block Bootstrap Approximation

The key implication of Theorem 1 and Lemma 1 is that, under a heavy-tailed sequence, the asymptotic null distributions of the persistence change tests depend on the tail index κ. In practice, the stable index κ is often unknown, and a complicated computational procedure is usually involved in estimating it. A cursory way to estimate κ was proposed by Mandelbrot [31], but its accuracy is insufficient. To avoid the nuisance parameter κ, we propose the following block bootstrap test.
Step 1. Calculate the centered residuals
\hat{\eta}_t = y_t - \hat{\rho} y_{t-1} - \frac{1}{T-1} \sum_{i=2}^{T} \left( y_i - \hat{\rho} y_{i-1} \right)
for t = 2, 3, …, T, where ρ̂ is the LS estimate of ρ based on the observed data y_1, y_2, …, y_T.
Step 2. Choose a positive integer b (< T) and let i_0, i_1, …, i_{K-1} be drawn i.i.d. from the uniform distribution on the set {1, 2, …, T − b}; here, we take K = [(T − 1)/b], where [·] denotes the integer part, although different choices for K are also possible. The procedure constructs a bootstrap pseudo-series y_1^*, y_2^*, …, y_l^*, where l = Kb + 1, as follows:
y_t^* = \begin{cases} y_1, & t = 1, \\ y_{t-1}^* + \hat{\eta}_{i_m + s}, & t = 2, 3, \ldots, l, \end{cases}
where m = [(t − 2)/b] and s = t − mb − 1.
Step 3. Compute the statistic
\Xi^* = \frac{\sup_{\tau\in\Lambda}\,(l - [\tau l])\,(\hat{\rho}_2^* - 1)^2}{\sup_{\tau\in\Lambda}\,[\tau l]\,(\hat{\rho}_1^* - 1)^2} = \frac{\sup_{\tau\in\Lambda} N^*}{\sup_{\tau\in\Lambda} D^*},
where ρ̂_1^*, ρ̂_2^* are LS estimates based on y_1^*, …, y_k^* and x_1^*, …, x_{l-k}^*, respectively, with x_t^* = y_{l-t+1}^*.
Step 4. Repeat steps 2 and 3 a large number of times (say, P times) to obtain the collection of pseudo-statistics Ξ_1^*, Ξ_2^*, …, Ξ_P^*.
Step 5. Compute the bootstrap critical value Ξ^*(α) (Ξ^*(1 − α)), defined such that the proportion of Ξ_1^*, Ξ_2^*, …, Ξ_P^* greater than it equals α (1 − α). We reject the null hypothesis if Ξ > Ξ^*(α) (Ξ < Ξ^*(1 − α)), because the empirical distribution of Ξ^* approximates the sampling distribution of Ξ under the null hypothesis, and we then conclude that H_10 (H_01) is true.
Remark 6.
The block bootstrap is the central part of the residual-based block bootstrap procedure; note, however, that the block bootstrap is not applied directly to the y_t data, nor to its first differences. Rather, the pseudo-series y_1^*, y_2^*, …, y_l^* is obtained by randomly selecting blocks of the centered residuals η̂_t and integrating them. Compared with other sampling methods, the block bootstrap has advantages for dependent sequences: each block retains the dependence structure of the sequence, while the blocks are independent of each other. Thus, this resampling procedure can accurately reproduce the sampling distribution of the test statistic under the null hypothesis. As in Arcones [32], we present Assumption 2 to ensure the convergence in probability of the bootstrap distribution.
Assumption 2.
As T → ∞, b → ∞ and b/T → 0.
Theorem 5.
If Assumption 1, Assumption 2, and H_1 hold, then under Model 1 we can derive
\Xi^* \rightarrow \frac{\sup_{\tau\in\Lambda} L(\tau, 1)}{\sup_{\tau\in\Lambda} L(0, \tau)},
where L(·,·) is the same as in Theorem 1 and is not shown in detail here.
Remark 7.
Theorem 5 gives the convergence of Ξ^* in the infinite variance case and guarantees that the block bootstrap statistic has the same asymptotic distribution under the null hypothesis, so that an accurate critical value can be obtained. We omit the block bootstrap method under Model 2, which is similar to that under Model 1. In the next section, we will demonstrate the feasibility of the block bootstrap in small samples through numerical simulation.

5. Numerical Results

In this section, we use Monte Carlo simulation methods to investigate the finite sample size and power properties of the tests based on the block bootstrap developed in Section 3 and Section 4. Our simulation study is based on samples of sizes 300 and 500, with 3000 replications at the nominal 5% level. Since the optimal block size b is difficult to select, we take b = CT^{1/3} based on experience, as can be seen in Paparoditis [24], which satisfies Assumption 2, where C is a constant. Here, we set C = 3; the choice of the best block length is not the focus of this article, but its effectiveness has been explained in the aforementioned literature.
We consider the data generated by the following autoregressive integrated moving average process
y_t = \rho_t y_{t-1} + \eta_t - \theta \eta_{t-1}, \qquad t = 1, 2, \ldots, T, \qquad\qquad (7)
where y_0 = 0 and {η_t} is independent and identically distributed (i.i.d.) in the domain of attraction of a stable law of order κ ∈ (0, 2]. The design parameter θ ∈ {−0.6, 0, 0.6}.
First of all, Table 1 reports partial size and power results when the innovation process is i.i.d. with ρ_t = 1 for t ≤ [τ*T] and ρ_t = ρ_2 for t > [τ*T], and T = 300. Here, Ξ represents the test statistic proposed in this paper, and Q represents the statistic used by Qin [16]. The empirical size and power, calculated as the rejection frequencies of the tests under H_1 and H_10, are provided for the stable index κ ∈ {2.0, 1.6, 1.2, 0.8, 0.4}. It can be seen from Table 1 that all empirical sizes of Ξ and Q tend towards the significance level of 5%. However, the power values of Ξ are better than those of Q. Especially when the tail index is small, for example κ = 0.4, the power values of Q are less than 0.02, which is not enough to reject the null hypothesis, whereas the power value of Ξ can reach 0.5. This is enough to show the advantage of the proposed test statistic. The power values of both tests decrease as κ decreases, because a smaller κ implies more outliers. Similarly, their power values decrease as ρ_2 increases, but it can be seen that Q is more sensitive to the change in ρ_2. Therefore, it can be concluded that the statistic proposed in this paper has more robust test performance for persistence change under heavy tails. Next, we present our numerical simulation results in detail.

5.1. Properties of the Tests under the H1, H0, and Near-Unit Root

In this section, we investigate the finite sample size properties of the tests when the data are generated by (7) with a constant parameter. Table 2 and Table 3 report the empirical rejection frequencies of Ξ(U) and Ξ(L) for T = 300 and T = 500, respectively, where Ξ(U) is the right-tailed test and Ξ(L) is the left-tailed test, based on the upper-tail 5% and lower-tail 5% critical values, respectively.
We can see the following regularities from the results in Table 2 and Table 3. Firstly, when ρ_t = 1 (null hypothesis H_1), the empirical size tends towards the significance level of 5%. As the tail index decreases, the empirical size does not fluctuate greatly; for example, when κ = 0.4, the empirical size of Ξ(L) is 0.0473, 0.0403, and 0.044 for θ = 0.0, 0.6, and −0.6, which still tends towards the nominal level. Notice that the empirical size under θ = 0 is better than that under θ = 0.6 or θ = −0.6, which indicates that the dependence of the innovation process has some influence on the test. Secondly, for the vast majority of the entries in Table 2 and Table 3 pertaining to H_0 (cases where |ρ_t| < 1), the empirical rejection frequencies of both the upper- and lower-tail Ξ-tests lie well below the nominal level, as predicted by the asymptotic result of Theorem 3. The empirical rejection frequencies increase gradually as the tail index decreases, and the empirical rejection frequencies of Ξ(U) are generally higher than those of Ξ(L), but still lower than the nominal level. It is worth noting that, even when ρ_t = 0.9, the empirical rejection frequencies are much smaller than the nominal level, which indicates that the proposed test statistic is conservative under the I(0) process. However, when θ = 0.6, ρ_t = 0.5 and κ < 1, the empirical rejection frequencies are severely distorted; for example, when κ = 0.4 and T = 300, the empirical rejection frequency of Ξ(U) is 0.514. What is interesting is that this phenomenon only occurs in the case of θ = 0.6, and there is no distortion for θ = −0.6. Finally, in the case of near-integrated time series, the empirical rejection frequencies are not high enough to reject the null hypothesis. Especially when γ = 1, the empirical rejection frequencies tend to the significance level, which confirms the conclusion of this paper, namely that when γ is smaller, the sequence is closer to the I(1) process (null hypothesis). Similarly, as γ increases, the empirical rejection frequencies of Ξ(U) gradually increase, while those of Ξ(L) gradually decrease. In summary, a test based on Ξ will be conservative when run at conventional significance levels and will never reject under a near-integrated time series in large samples, regardless of whether the innovation process is i.i.d. or dependent.

5.2. Properties of the Tests under the H10 and H01

In this section, we report the empirical rejection frequencies of the test when the data are generated according to the I(1) → I(0) switch DGP (7) with ρ_t = 1 for t ≤ [τ*T] and ρ_t = ρ_2 for t > [τ*T]. We consider the following values of the autoregressive and breakpoint parameters: ρ_2 ∈ {0.3, 0.5, 0.7} and τ* ∈ {0.3, 0.5, 0.7}, respectively. We only present the results for the I(1) → I(0) switch; the I(0) → I(1) case is similar.
Table 4 and Table 5 report the resulting empirical rejection frequencies for the upper-tail and lower-tail Ξ-tests (all tests were run at the nominal 5% level) for samples of size T = 300 and T = 500, respectively. Table 6 and Table 7 report the Monte Carlo sample mean and sample standard deviation of the corresponding persistence change-point estimators, τ̂ and τ̃, respectively, where τ̂ and τ̃ are the estimators of the true breakpoint under H_10 and H_01, respectively.
From Table 4 and Table 5, the following conclusions on the power properties can be obtained. As expected, the smaller the κ, the smaller the power values of Ξ(U); but even when κ < 1, the power values of Ξ(U) are enough to reject the null hypothesis. For example, when κ = 0.4, θ = 0, τ* = 0.7 and T = 300, the power values of Ξ(U) are 0.5557, 0.482, and 0.4097 for ρ_2 = 0.3, 0.5, 0.7. However, although the empirical rejection frequencies of Ξ(L) increase with the number of ‘outliers’, they are all less than 0.02; that is, the lower-tail Ξ-test will not reject the null hypothesis under H_10. This suggests that the Ξ tests may be reliable for identifying the direction of persistence change, even in small samples or when κ < 1. The empirical power of Ξ(U) drops significantly as the change magnitude (1 − ρ_2) decreases, a finding common to other change-point test procedures. It is also worth noting that the power is always higher with a larger τ*, because a larger τ* means that a greater proportion of the sample contains the I(1) component. It is clear from the different values of θ that the sensitivity of the rejection frequencies to whether the innovation process is independent does not vary considerably; this shows that the proposed test statistic is robust to different θ-values under the alternative hypothesis. As the sample size T increases, the power values become higher in each case, which is consistent with the results of Theorem 2. The above conclusions also confirm the effectiveness of the block bootstrap method.
Turning to the results for the breakpoint estimators τ̂ and τ̃ in Table 6 and Table 7, respectively, a number of comments seem appropriate. First of all, it is clear that τ̂ and τ̃ appear to converge to the true breakpoint τ*, as would be expected from Theorem 2. Secondly, the smaller the tail index, the larger the standard deviation; this is not surprising, since a smaller tail index implies more ‘outliers’. Moreover, τ̂ (τ̃) performs best for ρ_2 = 0.3 (ρ_1 = 0.3). This is also unsurprising, as this case provides the sharpest distinction between the I(1) and I(0) phases of the process among the cases considered. Finally, an interesting finding is that τ̂ performs significantly better than τ̃ for τ* = 0.7, slightly better than τ̃ for τ* = 0.5, and slightly worse than τ̃ for τ* = 0.3.
Although not reported, we also consider the power properties under the corresponding I(0) → I(1) switch DGP. These experiments yield results qualitatively similar to those observed in Table 4 and Table 5, with the roles of the upper and lower tails of the Ξ-tests switched and τ* replaced by (1 − τ*). This is because such a model can also be viewed as a process with a switch from I(1) to I(0) at (1 − τ*) when the data are taken in reverse order.
To summarize, the conclusions to be drawn from the results in this section seem clear. The test procedures introduced in Section 3 and Section 4 provide a practical method to effectively detect I(1) to I(0) or I(0) to I(1) persistence changes in infinite variance sequences. In addition, although the proposed test statistic is based on the I(1) null hypothesis, the empirical size is still well controlled when a series is I(0) or near-integrated throughout, and the empirical power performs well compared with that of Q. From the above research, we can also conclude that, for the block bootstrap method, b = 3T^{1/3} is a reasonable choice in most cases to effectively control the empirical size and obtain satisfactory empirical power.

6. Empirical Applications

Changes in persistence appear frequently, especially in financial time series. In this section, we illustrate the proposed test statistic with two examples of time series data, namely the US inflation rate and the USD/CNY exchange rate, which come from multpl.com and the website of the Federal Reserve Bank of St. Louis, respectively. At the 5% nominal level, we find clear evidence of a change from stationary to non-stationary, or from non-stationary to stationary, in these two series.
The first group contains 224 monthly observations of the US inflation rate from September 1958 to April 1977. Figure 1 reports the observations in this set. Chen [33] considered this dataset using a bootstrap test of change in persistence and concluded that the data contain a change from I(0) to I(1); they also derived that the estimated change period from I(0) to I(1) is May 1965. In this paper, we apply our proposed block bootstrap-based method to verify this conclusion. First, we take the first-order difference of the data shown in Figure 1 to obtain Figure 2. The differenced data were fitted using the software provided by John [34] to obtain a tail index of κ̂ = 1.8454, and the corresponding upper- and lower-tail critical values are 8.5203 and 0.0265, respectively. As suspected, the existence of a change in persistence is confirmed since Ξ = 0.0244 < 0.0265. This indicates that the data undergo a change from I(0) to I(1). The estimated change period from I(0) to I(1) is k* = 79 (May 1965) (see the vertical line in Figure 1). This coincides well with the result obtained by Chen [33].
The second group contains 580 observations of the USD/CNY exchange rate from 13 May 2009 to 31 August 2011 (see Figure 3). We again fit the first-order difference data in the same way and obtain a tail index of κ̂ = 1.0515, which reflects that the data contain many outliers. The corresponding upper- and lower-tail critical values are 5.8298 and 0.1202, respectively. The presence of a persistence change is confirmed since Ξ = 5.8441 > 5.8298, which indicates that the data have undergone a change from I(1) to I(0). In this example, on the basis of the estimated break fraction k* = 306 (15 June 2010), it is reasonable to split the whole sample into two regimes, where observations [1, 306] are I(1) and observations [307, 580] are I(0). To make our conclusion more reliable, we also applied the test proposed by Kim [5] and found that on 16 June 2010 the data change from I(1) to I(0), which further demonstrates the effectiveness of the proposed method.
Evidently, one may question whether this rejection signal was caused by a persistence change or by the outliers. To address this doubt and make our conclusion more reliable, we also applied our procedure to the first-order difference data in Figure 2 and Figure 4. With the same parameter choices, the proposed procedure does not reject the null hypothesis. This result again suggests that the original data undergo a persistence change and that the first-order difference data form a stationary sequence. In addition, we divide the two sets of data into two segments according to the estimated break fraction k* and test them separately, and find that neither segment rejects the null hypothesis. This indicates that there is only one persistence change in each of the two datasets.
Actually, in the context of analyzing real-world data, several ways of analyzing financial time series besides change point models are possible; see, for example, Cherstvy [35], Yu [36], and Kantz [37], who introduced three strategies for the analysis of financial time series based on time-averaged observables, including the time-averaged mean-squared displacement as well as the aging and delay time methods for varying fractions of the financial time series. They found that the observed features of the financial time series dynamics agree well with analytical results for time-averaged measurables of geometric Brownian motion, underlying the famed Black–Scholes–Merton model. Such approaches are useful for financial data analysis and for disclosing new universal features of stock market dynamics.

7. Conclusions

In this paper, we propose a new statistic to test for a change in persistence with heavy-tailed innovations, where neither the direction nor the location of the change needs to be assumed known. We derive the asymptotic distribution of the test statistic under H_1, which is a complicated functional of the κ-stable process. We also demonstrate that this test is consistent against changes in persistence, either from I(1) to I(0) or from I(0) to I(1), and prove the consistency of the breakpoint estimators. In particular, to determine the critical values of the null distribution of the test statistic, which contains the unknown tail index κ, we adopt an approach based on the block bootstrap, a variation on standard resampling methodology. The robustness of the block bootstrap method is verified by numerical simulation, and the resulting test displays no tendency to reject against constant I(0) or near-integrated time series. Empirical applications suggest that our procedures work well in practice. In conclusion, the proposed block bootstrap-based test statistic constitutes a practical tool for detecting changes in persistence with heavy-tailed innovations.

Author Contributions

Methodology, S.Z.; Software, H.J.; Writing—review & editing, M.S. All authors have read and agreed to the published version of the manuscript.

Funding

The authors gratefully acknowledge financial support from the NNSF (Nos. 62273272, 71473194) and the SNSF (No. 2020JM-513).

Data Availability Statement

No new data were created or analyzed in this study. Data sharing is not applicable to this article.

Conflicts of Interest

The authors declare no conflicts of interest.

Appendix A

Proof of Theorem 1.
For the present, assume τ is fixed. We start with the following statistic
\tilde{\Xi} = \frac{(T - [\tau T])(\hat{\rho}_2 - 1)^2}{[\tau T](\hat{\rho}_1 - 1)^2} = \frac{N}{D},
where ρ̂_1, ρ̂_2 are the LS estimates based on y_1, y_2, …, y_k and x_1, x_2, …, x_{T-k}, respectively.
Firstly, for the numerator of \tilde{\Xi},
N = (T - [\tau T])(\hat{\rho}_2 - 1)^2,
where \hat{\rho}_2 = \sum_{t=2}^{T-k+1} x_t x_{t-1} \left( \sum_{t=2}^{T-k+1} x_t^2 \right)^{-1}. It is easily shown that
(\hat{\rho}_2 - 1) = \frac{\sum_{t=2}^{T-k+1} y_{T-t+1} y_{T-t+2} - \sum_{t=2}^{T-k+1} y_{T-t+1}^2}{\sum_{t=2}^{T-k+1} y_{T-t+1}^2} = \frac{E}{F}.
By using the conclusion about stochastic integrals from Knight [38], we can derive
a_T^{-2} \cdot E = a_T^{-2} \left( \sum_{t=2}^{T-k+1} y_{T-t+1} y_{T-t+2} - \sum_{t=2}^{T-k+1} y_{T-t+1}^2 \right) = a_T^{-2} \sum_{t=2}^{T-k+1} \eta_{T-t+2} y_{T-t+1} \rightarrow \int_\tau^1 U_\kappa(r)\, dU_\kappa(r)
and
T^{-1} a_T^{-2} \cdot F = T^{-1} a_T^{-2} \sum_{t=2}^{T-k+1} y_{T-t+1}^2 \rightarrow \int_\tau^1 U_\kappa^2(r)\, dr.
Thus, it leads to
T(\hat{\rho}_2 - 1) \rightarrow \frac{\int_\tau^1 U_\kappa(r)\, dU_\kappa(r)}{\int_\tau^1 U_\kappa^2(r)\, dr},
and
T \cdot N = (1 - \tau) \left[ T(\hat{\rho}_2 - 1) \right]^2 \rightarrow (1 - \tau) \left( \frac{\int_\tau^1 U_\kappa(r)\, dU_\kappa(r)}{\int_\tau^1 U_\kappa^2(r)\, dr} \right)^2.
For the denominator of \tilde{\Xi}, we have
D = [\tau T](\hat{\rho}_1 - 1)^2,
where
(\hat{\rho}_1 - 1) = \frac{\sum_{t=2}^{k} y_t y_{t-1} - \sum_{t=2}^{k} y_{t-1}^2}{\sum_{t=2}^{k} y_{t-1}^2} = \frac{A}{B}.
Similarly, we obtain
a_T^{-2} \cdot A = a_T^{-2} \left( \sum_{t=2}^{k} y_t y_{t-1} - \sum_{t=2}^{k} y_{t-1}^2 \right) = a_T^{-2} \sum_{t=2}^{k} \eta_t y_{t-1} \rightarrow \int_0^\tau U_\kappa(r)\, dU_\kappa(r)
and
T^{-1} a_T^{-2} \cdot B = T^{-1} a_T^{-2} \sum_{t=2}^{k} y_{t-1}^2 \rightarrow \int_0^\tau U_\kappa^2(r)\, dr.
Thus, it follows that
T \cdot D = \tau \left[ T(\hat{\rho}_1 - 1) \right]^2 \rightarrow \tau \left( \frac{\int_0^\tau U_\kappa(r)\, dU_\kappa(r)}{\int_0^\tau U_\kappa^2(r)\, dr} \right)^2.
Therefore, by (A3) and (A4),
\tilde{\Xi} \rightarrow \frac{L(\tau, 1)}{L(0, \tau)}.
Using the continuous mapping theorem, the desired result is available:
\Xi \rightarrow \frac{\sup_{\tau\in\Lambda} L(\tau, 1)}{\sup_{\tau\in\Lambda} L(0, \tau)}.
 □
Proof of Lemma 1.
Before presenting the proof, we define the following random processes. For ζ = 0,
L_1^0(a, b) = (b - a) \left( \frac{L_{11}^0}{L_{12}^0} \right)^2, \qquad L_2^0(a, b) = (b - a) \left( \frac{L_{21}^0}{L_{22}^0} \right)^2,
L_{11}^0 = \int_\tau^1 U_\kappa(r)\, dU_\kappa(r) - (1 - \tau)^{-1} \left[ U_\kappa(1) - U_\kappa(\tau) \right] \int_\tau^1 U_\kappa(r)\, dr,
L_{12}^0 = \int_\tau^1 U_\kappa^2(r)\, dr - (1 - \tau)^{-1} \left( \int_\tau^1 U_\kappa(r)\, dr \right)^2,
L_{21}^0 = \int_0^\tau U_\kappa(r)\, dU_\kappa(r) - \tau^{-1} U_\kappa(\tau) \int_0^\tau U_\kappa(r)\, dr,
L_{22}^0 = \int_0^\tau U_\kappa^2(r)\, dr - \tau^{-1} \left( \int_0^\tau U_\kappa(r)\, dr \right)^2,
and for ζ = 1,
L_1^1(a, b) = (b - a) \left( \frac{L_{11}^1}{L_{12}^1} \right)^2, \qquad L_2^1(a, b) = (b - a) \left( \frac{L_{21}^1}{L_{22}^1} \right)^2,
L_{11}^1 = \int_\tau^1 U_\kappa(r)\, dU_\kappa(r) - \left[ U_\kappa(1) - U_\kappa(\tau) \right] W_3 - (1 - \tau)^{-1} \int_\tau^1 r\, dU_\kappa(r)\, W_4 - (1 - \tau)^{-1} \int_\tau^1 U_\kappa(r)\, dr\, W_4 + W_3 W_4 + \frac{1 + \tau}{2(1 - \tau)} W_4^2,
L_{12}^1 = \int_\tau^1 U_\kappa^2(r)\, dr - 2 \int_\tau^1 U_\kappa(r)\, dr\, W_3 - 2(1 - \tau)^{-1} \int_\tau^1 r U_\kappa(r)\, dr\, W_4 + (1 + \tau) W_3 W_4 + (1 - \tau) W_3^2 + \frac{(1 - \tau)^2 (1 - \tau^3)}{3} W_4^2,
L_{21}^1 = \int_0^\tau U_\kappa(r)\, dU_\kappa(r) - U_\kappa(\tau) W_1 - \tau^{-1} U_\kappa(\tau) W_2 + W_1 W_2 + \frac{1}{2} W_2^2,
L_{22}^1 = \int_0^\tau U_\kappa^2(r)\, dr - 2 \int_0^\tau U_\kappa(r)\, dr\, W_1 - 2 \tau^{-1} \int_0^\tau r U_\kappa(r)\, dr\, W_2 + \tau W_1^2 + \frac{\tau}{3} W_2^2 + \tau W_1 W_2,
with
W_1 = 4 \tau^{-1} \int_0^\tau U_\kappa(r)\, dr - 6 \tau^{-2} \int_0^\tau r U_\kappa(r)\, dr, \qquad W_2 = -6 \tau^{-1} \int_0^\tau U_\kappa(r)\, dr + 12 \tau^{-2} \int_0^\tau r U_\kappa(r)\, dr,
W_3 = 4 (1 - \tau)^{-1} \int_\tau^1 U_\kappa(r)\, dr - 6 (1 - \tau)^{-2} \int_\tau^1 r U_\kappa(r)\, dr, \qquad W_4 = -6 (1 - \tau)^{-1} \int_\tau^1 U_\kappa(r)\, dr + 12 (1 - \tau)^{-2} \int_\tau^1 r U_\kappa(r)\, dr.
First, consider the proof for the case of de-meaned data. Under Model 2, we obtain
(\hat{\rho}_2 - 1) = \frac{\sum_{t=2}^{T-k+1} \hat{\varepsilon}_{T-t+1} \hat{\varepsilon}_{T-t+2} - \sum_{t=2}^{T-k+1} \hat{\varepsilon}_{T-t+1}^2}{\sum_{t=2}^{T-k+1} \hat{\varepsilon}_{T-t+1}^2}, \qquad \hat{\varepsilon}_{T-t+1} = \varepsilon_{T-t+1} + (\mu - \hat{\mu}_2),
for the numerator of the statistic and
(\hat{\rho}_1 - 1) = \frac{\sum_{t=2}^{k} \hat{\varepsilon}_t \hat{\varepsilon}_{t-1} - \sum_{t=2}^{k} \hat{\varepsilon}_{t-1}^2}{\sum_{t=2}^{k} \hat{\varepsilon}_{t-1}^2}, \qquad \hat{\varepsilon}_t = \varepsilon_t + (\mu - \hat{\mu}_1),
for the denominator of the statistic, where μ̂_1 and μ̂_2 are the LS estimates based on y_1, y_2, …, y_k and x_1, x_2, …, x_{T-k}, respectively. It is shown that
a_T^{-1}(\hat{\mu}_2 - \mu) \rightarrow (1 - \tau)^{-1} \int_\tau^1 U_\kappa(r)\, dr \quad \text{and} \quad a_T^{-1}(\hat{\mu}_1 - \mu) \rightarrow \tau^{-1} \int_0^\tau U_\kappa(r)\, dr.
For the numerator of the statistic, we can obtain
T \cdot N \rightarrow (1 - \tau) \left( \frac{\int_\tau^1 U_\kappa(r)\, dU_\kappa(r) - (1 - \tau)^{-1} \left[ U_\kappa(1) - U_\kappa(\tau) \right] \int_\tau^1 U_\kappa(r)\, dr}{\int_\tau^1 U_\kappa^2(r)\, dr - (1 - \tau)^{-1} \left( \int_\tau^1 U_\kappa(r)\, dr \right)^2} \right)^2.
This is because
a_T^{-2} \left( \sum_{t=2}^{T-k+1} \hat{\varepsilon}_{T-t+1} \hat{\varepsilon}_{T-t+2} - \sum_{t=2}^{T-k+1} \hat{\varepsilon}_{T-t+1}^2 \right) = a_T^{-2} \left( \sum_{t=2}^{T-k+1} \varepsilon_{T-t+1} \eta_{T-t+2} + (\mu - \hat{\mu}_2) \sum_{t=2}^{T-k+1} \eta_{T-t+1} \right) \rightarrow \int_\tau^1 U_\kappa(r)\, dU_\kappa(r) - (1 - \tau)^{-1} \left[ U_\kappa(1) - U_\kappa(\tau) \right] \int_\tau^1 U_\kappa(r)\, dr,
and
T^{-1} a_T^{-2} \sum_{t=2}^{T-k+1} \hat{\varepsilon}_{T-t+1}^2 = T^{-1} a_T^{-2} \left( \sum_{t=2}^{T-k+1} \varepsilon_{T-t+1}^2 + 2(\mu - \hat{\mu}_2) \sum_{t=2}^{T-k+1} \varepsilon_{T-t+1} + (T - k)(\mu - \hat{\mu}_2)^2 \right) \rightarrow \int_\tau^1 U_\kappa^2(r)\, dr - (1 - \tau)^{-1} \left( \int_\tau^1 U_\kappa(r)\, dr \right)^2.
Similarly, we have
\Xi \rightarrow \frac{\sup_{\tau\in\Lambda} L_1^0(\tau, 1)}{\sup_{\tau\in\Lambda} L_2^0(0, \tau)}.
Next, let us consider the proof for the de-meaned and de-trended case. Note that, unlike the de-meaned case, under the null we have \hat{\varepsilon}_{T-t+1} = \varepsilon_{T-t+1} + (\mu - \hat{\mu}_2) + (T - t + 1)(\beta - \hat{\beta}_2) for the numerator of the statistic and \hat{\varepsilon}_t = \varepsilon_t + (\mu - \hat{\mu}_1) + t(\beta - \hat{\beta}_1) for the denominator of the statistic. Here, μ̂_1, β̂_1 and μ̂_2, β̂_2 are the LS estimates based on y_1, y_2, …, y_k and x_1, x_2, …, x_{T-k}, respectively.
With the same algebraic calculation, it is shown that
a_T^{-1}(\hat{\mu}_2 - \mu) \rightarrow 4(1 - \tau)^{-1} \int_\tau^1 U_\kappa(r)\, dr - 6(1 - \tau)^{-2} \int_\tau^1 r U_\kappa(r)\, dr = W_3,
a_T^{-1}(T - k)(\hat{\beta}_2 - \beta) \rightarrow -6(1 - \tau)^{-1} \int_\tau^1 U_\kappa(r)\, dr + 12(1 - \tau)^{-2} \int_\tau^1 r U_\kappa(r)\, dr = W_4,
a_T^{-1}(\hat{\mu}_1 - \mu) \rightarrow 4 \tau^{-1} \int_0^\tau U_\kappa(r)\, dr - 6 \tau^{-2} \int_0^\tau r U_\kappa(r)\, dr = W_1,
a_T^{-1} k (\hat{\beta}_1 - \beta) \rightarrow -6 \tau^{-1} \int_0^\tau U_\kappa(r)\, dr + 12 \tau^{-2} \int_0^\tau r U_\kappa(r)\, dr = W_2.
The rest of the proof technique is similar to the de-meaned case, and we can obtain
\Xi \rightarrow \frac{\sup_{\tau\in\Lambda} L_1^1(\tau, 1)}{\sup_{\tau\in\Lambda} L_2^1(0, \tau)}.
 □
For the remainder of Appendix A, we omit the corresponding proofs under Model 2, which are straightforward but tedious and follow the same logical method as those presented under Model 1.
Proof of Theorem 2.
We first consider part (i) of Theorem 2 for the result under H_10. Firstly, we consider the numerator of the statistic. When τ ≤ τ*, the subsample x_1, x_2, …, x_{T-k} is made up of I(0) and I(1) observations, and we have
y_{T-t+2} = \begin{cases} \rho\, y_{T-t+1} + \eta_{T-t+2}, & 1 < t \le T - k^* + 1, \\ y_{T-t+1} + \eta_{T-t+2}, & T - k^* + 1 < t \le T - k, \end{cases} \qquad |\rho| < 1.
Furthermore, we can derive
a_T^{-2} \cdot E = a_T^{-2} \left( \sum_{t=2}^{T-k+1} y_{T-t+1} y_{T-t+2} - \sum_{t=2}^{T-k+1} y_{T-t+1}^2 \right) = a_T^{-2} \left( (\rho - 1) \sum_{t=2}^{T-k^*+1} y_{T-t+1}^2 + \sum_{t=2}^{T-k^*+1} \eta_{T-t+2} y_{T-t+1} + \sum_{t=T-k^*+2}^{T-k+1} \eta_{T-t+2} y_{T-t+1} \right) = O_p(1)
and
T^{-1} a_T^{-2} \cdot F = T^{-1} a_T^{-2} \sum_{t=2}^{T-k+1} y_{T-t+1}^2 = T^{-1} a_T^{-2} \left( \sum_{t=2}^{T-k^*+1} y_{T-t+1}^2 + \sum_{t=T-k^*+2}^{T-k+1} y_{T-t+1}^2 \right) = O_p(1).
That is because \left( a_T^{-1} \sum_{t=1}^{k} y_t,\; a_T^{-2} \sum_{t=1}^{k} y_t^2 \right) = \left( O_p(1), O_p(1) \right) and a_T^{-2} \sum_{t=1}^{k} y_{t-1} \eta_t = O_p(1) under the I(0) process. The first result was proved by Phillips [27]. For the second, since y_{t-1} = \sum_{j=0}^{\infty} \rho^j \eta_{t-j-1}, we have \sum_{t=1}^{k} y_{t-1} \eta_t = \sum_{j=0}^{\infty} \rho^j \sum_{t=1}^{k} \eta_t \eta_{t-j-1}. And, because a_T^{-2} \sum_{t=1}^{k} \eta_t \eta_{t-j-1} = O_p(1) from Phillips [27] and |ρ| < 1, it follows that a_T^{-2} \sum_{t=1}^{k} y_{t-1} \eta_t = O_p(1).
So, we can obtain
T(\hat{\rho}_2 - 1) = O_p(1).
Then, for the numerator of \tilde{\Xi}, we obtain
T \cdot N = (1 - \tau) \left[ T(\hat{\rho}_2 - 1) \right]^2 = O_p(1).
When τ > τ*, the subsample x_1, x_2, …, x_{T-k} is I(0). Similarly, we can obtain
a_T^{-2} \cdot E = a_T^{-2} \left( \sum_{t=2}^{T-k+1} y_{T-t+1} y_{T-t+2} - \sum_{t=2}^{T-k+1} y_{T-t+1}^2 \right) = a_T^{-2} \left( (\rho - 1) \sum_{t=2}^{T-k+1} y_{T-t+1}^2 + \sum_{t=2}^{T-k+1} \eta_{T-t+2} y_{T-t+1} \right) = O_p(1)
and
a_T^{-2} \cdot F = a_T^{-2} \sum_{t=2}^{T-k+1} y_{T-t+1}^2 = O_p(1).
Therefore, it is easily shown that
T^{-1} \cdot N = (1 - \tau)(\hat{\rho}_2 - 1)^2 = O_p(1).
Combining (A9) and (A11), we have N = O_p(T^{-1}) for τ ≤ τ* and N = O_p(T) for τ > τ*. Therefore, \sup_{\tau\in\Lambda} N = O_p(T).
Then, consider the denominator of the statistic. When τ ≤ τ*, y_1, …, y_k is I(1). Hence, we have
T \cdot D = \tau \left[ T(\hat{\rho}_1 - 1) \right]^2 \rightarrow \tau \left( \frac{\int_0^\tau U_\kappa(r)\, dU_\kappa(r)}{\int_0^\tau U_\kappa^2(r)\, dr} \right)^2.
When τ > τ*, y_1, …, y_k is made up of the I(1) and I(0) processes, and we have
a_T^{-2} \cdot A = a_T^{-2} \left( \sum_{t=2}^{k} y_t y_{t-1} - \sum_{t=2}^{k} y_{t-1}^2 \right) = a_T^{-2} \left( (\rho - 1) \sum_{t=k^*+1}^{k} y_{t-1}^2 + \sum_{t=2}^{k^*} \eta_t y_{t-1} + \sum_{t=k^*+1}^{k} \eta_t y_{t-1} \right) = O_p(1)
and
T^{-1} a_T^{-2} \cdot B = T^{-1} a_T^{-2} \sum_{t=2}^{k} y_{t-1}^2 = T^{-1} a_T^{-2} \left( \sum_{t=2}^{k^*} y_{t-1}^2 + \sum_{t=k^*+1}^{k} y_{t-1}^2 \right) = O_p(1).
So, we can obtain
T(\hat{\rho}_1 - 1) = O_p(1).
Then, for the denominator of \tilde{\Xi}, we obtain
T \cdot D = \tau \left[ T(\hat{\rho}_1 - 1) \right]^2 = O_p(1).
(A12) and (A14) imply that \sup_{\tau\in\Lambda} D = O_p(T^{-1}). Therefore, we obtain \Xi = O_p(T^2).
Let Q_T(\tau) = T^{-1} \cdot N; then, we have the following result:
Q_T(\tau) \rightarrow Q(\tau) = (1 - \tau) C^2 I(\tau \ge \tau^*),
where C is a constant because, for τ ≥ τ*, we have
(\hat{\rho}_2 - 1) = (\rho - 1) + \frac{\sum_{t=2}^{T-k+1} \eta_{T-t+2} y_{T-t+1}}{\sum_{t=2}^{T-k+1} y_{T-t+1}^2} = (\rho - 1) + \frac{\sum_{j=0}^{\infty} \rho^j a_T^{-2} \sum_{t=k+1}^{T} \eta_t \eta_{t-j-1}}{\sum_{j=0}^{\infty} \sum_{i=0}^{\infty} \rho^j \rho^i a_T^{-2} \sum_{t=k+1}^{T} \eta_{t-j-1} \eta_{t-i-1}} \rightarrow (\rho - 1) + \frac{\sum_{j=0}^{\infty} \rho^j f_{j+1}(1) \int_\tau^1 (dU_\kappa)^2}{\sum_{j=0}^{\infty} \sum_{i=0}^{\infty} \rho^j \rho^i f_{i-j}(1) \int_\tau^1 (dU_\kappa)^2} = (\rho - 1) + \frac{c_1}{c_2} = C = O_p(1).
And, (A16) holds because Phillips [27] have shown that a_T^{-2} \sum_{t=1}^{T} \eta_t \eta_{t+h} \rightarrow f_h(1) \int_0^1 (dU_\kappa)^2, and when |ρ| < 1 we have \sum_{j=0}^{\infty} \rho^j < \infty.
Since
\hat{\tau} = \arg\sup_{\tau\in\Lambda} Q_T(\tau) \quad \text{and} \quad \tau^* = \arg\sup_{\tau\in\Lambda} Q(\tau),
one can apply an argument similar to Lemma 3 of Amemiya [39]. Let G = (a, b) be an open interval in Λ containing τ*. We denote by Ḡ the complement of G in Λ. Then, Ḡ ∩ Λ is compact and \sup_{\tau\in\bar{G}\cap\Lambda} Q(\tau) exists; in fact, \sup_{\tau\in\bar{G}\cap\Lambda} Q(\tau) = Q(a). Let δ = Q(\tau^*) - \sup_{\tau\in\bar{G}\cap\Lambda} Q(\tau) and let E_T be the event ‘|Q_T(\tau) - Q(\tau)| < δ/2 for all τ’. Then, it can be shown that the event E_T implies that |Q(\hat{\tau}) - Q(\tau^*)| < δ, which in turn implies that \hat{\tau} \in G. Therefore, \Pr(E_T) \le \Pr(\hat{\tau} \in G). Since \Pr(E_T) \to 1 by the uniform convergence in (A15), we have \Pr(\hat{\tau} \in G) \to 1, which implies that \hat{\tau} \rightarrow_p \tau^*. The stated result for Ξ under H_10 then follows immediately.
Next, consider the results in part (ii) relating to H_01. Under this alternative, it is easily seen that \sup_{\tau\in\Lambda} N = O_p(T^{-1}). For τ ≤ τ*, y_1, …, y_k is an I(0) process. Hence, we have
T^{-1} \cdot D = \tau (\hat{\rho}_1 - 1)^2 = O_p(1).
When τ > τ*, y_1, …, y_k is made up of I(0) and I(1) observations. It is easy to show that
a_T^{-2} \cdot A = a_T^{-2} \left( \sum_{t=2}^{k} y_t y_{t-1} - \sum_{t=2}^{k} y_{t-1}^2 \right) = a_T^{-2} \left( (\rho - 1) \sum_{t=2}^{k^*} y_{t-1}^2 + \sum_{t=2}^{k^*} \eta_t y_{t-1} + \sum_{t=k^*+1}^{k} \eta_t y_{t-1} \right) = O_p(1)
and
T^{-1} a_T^{-2} \cdot B = T^{-1} a_T^{-2} \sum_{t=2}^{k} y_{t-1}^2 = T^{-1} a_T^{-2} \left( \sum_{t=2}^{k^*} y_{t-1}^2 + \sum_{t=k^*+1}^{k} y_{t-1}^2 \right) = O_p(1).
So, we obtain
T(\hat{\rho}_1 - 1) = O_p(1).
Then, we have the following result:
T \cdot D = \tau \left[ T(\hat{\rho}_1 - 1) \right]^2 = O_p(1).
From (A17) and (A19), we obtain \sup_{\tau\in\Lambda} D = O_p(T). The proof of \tilde{\tau} \rightarrow_p \tau^* is entirely analogous to the proof of \hat{\tau} \rightarrow_p \tau^* and is therefore omitted. Hence, part (ii) of Theorem 2 holds. □
Proof of Theorem 3.
For the numerator and denominator of the statistic, we have
T^{-1} \cdot N = (1 - \tau)(\hat{\rho}_2 - 1)^2 \rightarrow (1 - \tau) C_1^2,
T^{-1} \cdot D = \tau (\hat{\rho}_1 - 1)^2 \rightarrow \tau C_2^2,
where C_1 = C was defined in (A16). Similarly, for C_2, we have
(\hat{\rho}_1 - 1) = (\rho - 1) + \frac{\sum_{t=2}^{k} \eta_t y_{t-1}}{\sum_{t=2}^{k} y_{t-1}^2} = (\rho - 1) + \frac{\sum_{j=0}^{\infty} \rho^j a_T^{-2} \sum_{t=2}^{k} \eta_t \eta_{t-j-1}}{\sum_{j=0}^{\infty} \sum_{i=0}^{\infty} \rho^j \rho^i a_T^{-2} \sum_{t=2}^{k} \eta_{t-j} \eta_{t-i}} \rightarrow (\rho - 1) + \frac{\sum_{j=0}^{\infty} \rho^j f_{j+1}(1) \int_0^\tau (dU_\kappa)^2}{\sum_{j=0}^{\infty} \sum_{i=0}^{\infty} \rho^j \rho^i f_{i-j}(1) \int_0^\tau (dU_\kappa)^2} = (\rho - 1) + \frac{c_1}{c_2} = C = O_p(1),
and hence
C_1 = C_2 = C.
So, with (A20), (A21), and (A23), we derive that
\Xi \rightarrow \frac{\sup_{\tau\in\Lambda} (1 - \tau) C^2}{\sup_{\tau\in\Lambda} \tau C^2} = 1.
 □
Proof of Theorem 4.
Under Model 1, we have
T(\hat{\rho}_1 - \rho_1) \rightarrow \frac{\int_0^\tau X_\kappa(r)\, dU_\kappa(r)}{\int_0^\tau X_\kappa^2(r)\, dr} \quad \text{and} \quad T(\hat{\rho}_2 - \rho_2) \rightarrow \frac{\int_\tau^1 X_\kappa(r)\, dU_\kappa(r)}{\int_\tau^1 X_\kappa^2(r)\, dr},
which were proved by Chan [30]. Due to ρ_1 = ρ_2 = 1 − γ/T, we can derive
T(\hat{\rho}_1 - 1) \rightarrow \frac{\int_0^\tau X_\kappa(r)\, dU_\kappa(r)}{\int_0^\tau X_\kappa^2(r)\, dr} - \gamma \quad \text{and} \quad T(\hat{\rho}_2 - 1) \rightarrow \frac{\int_\tau^1 X_\kappa(r)\, dU_\kappa(r)}{\int_\tau^1 X_\kappa^2(r)\, dr} - \gamma.
Then, Theorem 4 can be obtained immediately. Furthermore, for the results about d_t = μ + βt related to Lemma 2, the proofs are similar and are therefore omitted here; however, we should give the following definitions:
G_1(a, b) = (b - a) \left( \frac{G_{11}}{G_{12}} - \gamma \right)^2, \qquad G_2(a, b) = (b - a) \left( \frac{G_{21}}{G_{22}} - \gamma \right)^2,
G_{11} = \int_\tau^1 X_\kappa(r)\, dU_\kappa(r) - (1 - \tau)^{-1} \left[ U_\kappa(1) - U_\kappa(\tau) \right] \int_\tau^1 X_\kappa(r)\, dr,
G_{12} = \int_\tau^1 X_\kappa^2(r)\, dr - (1 - \tau)^{-1} \left( \int_\tau^1 X_\kappa(r)\, dr \right)^2,
G_{21} = \int_0^\tau X_\kappa(r)\, dU_\kappa(r) - \tau^{-1} U_\kappa(\tau) \int_0^\tau X_\kappa(r)\, dr,
G_{22} = \int_0^\tau X_\kappa^2(r)\, dr - \tau^{-1} \left( \int_0^\tau X_\kappa(r)\, dr \right)^2.
 □
Before giving the proof of Theorem 5, we need to introduce the following two Lemmas.
Lemma A1.
Let
R_1(r) = a_l^{-1} \sum_{t=1}^{[lr]} \eta_t^*,
where \eta_t^* = y_t^* - y_{t-1}^* for t = 2, …, [lr]. If Assumption 1 and Assumption 2 hold, then
R_1(r) = a_l^{-1} \sum_{t=1}^{[lr]} \eta_t^* \rightarrow U_\kappa(r).
Proof. 
Without loss of generality, we assume that y_0 = 0; for 0 ≤ r ≤ 1, by construction of the block bootstrap method, we have
a_l^{-1} \sum_{t=1}^{[lr]} \eta_t^* = a_l^{-1} y_1 + a_l^{-1} \sum_{m=0}^{M_r} \sum_{s=1}^{V} \hat{\eta}_{i_m + s},
where M_r = [([lr] - 2)/b] and V = \min\{b, [lr] - mb - 1\}. Note that
R_1(r) = a_l^{-1} \sum_{m=0}^{M_r} \sum_{s=1}^{b} \hat{\eta}_{i_m + s} - a_l^{-1} \sum_{s=B+1}^{b} \hat{\eta}_{i_{M_r} + s} + O_p(a_l^{-1}),
and \sup_{0 \le r \le 1} \left| a_l^{-1} \sum_{s=B+1}^{b} \hat{\eta}_{i_{M_r} + s} \right| = o_p(1). We only consider the first term on the right-hand side of (A25) in the following. We first show that, uniformly in r,
a_l^{-1} \sum_{m=0}^{M_r} \sum_{s=1}^{b} \hat{\eta}_{i_m + s} - a_l^{-1} \sum_{m=0}^{M_r} \sum_{s=1}^{b} \eta_{i_m + s} \rightarrow 0
in probability. To establish (A26), use the definition of \hat{\eta}_t to verify that
a_l^{-1} \sum_{m=0}^{M_r} \sum_{s=1}^{b} \hat{\eta}_{i_m + s} = a_l^{-1} \sum_{m=0}^{M_r} \sum_{s=1}^{b} \left( \eta_{i_m + s} - \frac{1}{T-1} \sum_{j=2}^{T} \eta_j \right) + a_l^{-1} (1 - \hat{\rho}) \sum_{m=0}^{M_r} \sum_{s=1}^{b} \left( y_{i_m + s - 1} - \frac{1}{T-1} \sum_{j=2}^{T} y_{j-1} \right).
We then have
E^* \left( \sum_{s=1}^{b} \left( y_{i_m + s - 1} - \frac{1}{T-1} \sum_{j=2}^{T} y_{j-1} \right) \right) = \sum_{s=1}^{b} \frac{1}{T-b} \sum_{t=1}^{T-b} \left( y_{t + s - 1} - \frac{1}{T-1} \sum_{j=2}^{T} y_{j-1} \right) = \frac{1}{(T-b)(T-1)} \left( - (T-1) \sum_{s=1}^{b} \left( \sum_{t=1}^{s-1} y_t + \sum_{t=T-b+s}^{T-1} y_t \right) + b(b-1) \sum_{j=2}^{T} y_{j-1} \right) = O_p\!\left( b^2 a_l (T-b)^{-1} \right).
Similarly,
E^* \left( \sum_{s=1}^{b} \left( y_{i_m + s - 1} - \frac{1}{T-1} \sum_{j=2}^{T} y_{j-1} \right) \right)^2 = \frac{1}{T-b} \sum_{t=1}^{T-b} \left( \sum_{s=1}^{b} \left( y_{t + s - 1} - \frac{1}{T-1} \sum_{j=2}^{T} y_{j-1} \right) \right)^2 = O_p\!\left( b^2 a_l^2 \right).
Let
T_n^* = a_l^{-1} \sum_{m=0}^{M_r} \sum_{s=1}^{b} \left( y_{i_m + s - 1} - \frac{1}{T-1} \sum_{j=2}^{T} y_{j-1} \right).
Then, we have E^*(T_n^*)^2 = O_p(bT). Since (1 - \hat{\rho}) = O_p(T^{-1}) when ρ = 1, we have
a_l^{-1} (1 - \hat{\rho}) \sum_{m=0}^{M_r} \sum_{s=1}^{b} \left( y_{i_m + s - 1} - \frac{1}{T-1} \sum_{j=2}^{T} y_{j-1} \right) = O_p\!\left( b^{1/2} T^{-1/2} \right).
From (A27) and (A28), it follows uniformly in r that
a_l^{-1} \sum_{m=0}^{M_r} \sum_{s=1}^{b} \hat{\eta}_{i_m + s} - a_l^{-1} \sum_{m=0}^{M_r} \sum_{s=1}^{b} \left( \eta_{i_m + s} - \frac{1}{T-1} \sum_{j=2}^{T} \eta_j \right) \rightarrow 0
in probability. Since E(η_t) = 0 if κ > 1 and the η_t's are symmetric about 0 if κ ≤ 1, we obtain (A26). Then, by the stationarity of η_t, we obtain uniformly in r that
a_l^{-1} \sum_{m=0}^{M_r} \sum_{s=1}^{b} \eta_{i_m + s} \rightarrow U_\kappa(r).
By (A26) and (A30), the assertion of Lemma A1 is implied. □
Lemma A2.
Let
R_2(r) = a_l^{-2} \sum_{t=1}^{[lr]} (\eta_t^*)^2,
where \eta_t^* = y_t^* - y_{t-1}^* for t = 2, …, [lr]. If Assumption 1 and Assumption 2 hold, then
R_2(r) = a_l^{-2} \sum_{t=1}^{[lr]} (\eta_t^*)^2 \rightarrow \int_0^r (dU_\kappa)^2.
The proof is similar to Lemma A1.
Proof of Theorem 5.
For the numerator, note that
(\hat{\rho}_2^* - 1) = \frac{\sum_{t=2}^{l-k+1} y_{l-t+1}^* y_{l-t+2}^* - \sum_{t=2}^{l-k+1} (y_{l-t+1}^*)^2}{\sum_{t=2}^{l-k+1} (y_{l-t+1}^*)^2}.
By Lemmas A1 and A2, we can easily obtain the formulas
a_l^{-2} \left( \sum_{t=2}^{l-k+1} y_{l-t+1}^* y_{l-t+2}^* - \sum_{t=2}^{l-k+1} (y_{l-t+1}^*)^2 \right) = a_l^{-2} \sum_{t=2}^{l-k+1} \eta_{l-t+2}^* y_{l-t+1}^* \rightarrow \int_\tau^1 U_\kappa(r)\, dU_\kappa(r),
and
l^{-1} a_l^{-2} \sum_{t=2}^{l-k+1} (y_{l-t+1}^*)^2 \rightarrow \int_\tau^1 U_\kappa^2(r)\, dr.
So, it is straightforward to show that
l \cdot N^* = l \left( l - [l\tau] \right) (\hat{\rho}_2^* - 1)^2 \rightarrow (1 - \tau) \left( \frac{\int_\tau^1 U_\kappa(r)\, dU_\kappa(r)}{\int_\tau^1 U_\kappa^2(r)\, dr} \right)^2.
By an application of the continuous mapping theorem, and since the denominator is dealt with analogously, we can obtain
\Xi^* \rightarrow \frac{\sup_{\tau\in\Lambda} L(\tau, 1)}{\sup_{\tau\in\Lambda} L(0, \tau)}.
 □

References

1. Busetti, F.; Taylor, A.R. Tests of stationarity against a change in persistence. J. Econom. 2004, 123, 33–66.
2. Chen, W.; Huang, Z.; Yi, Y. Is there a structural change in the persistence of WTI–Brent oil price spreads in the post-2010 period? Econ. Model. 2015, 50, 64–71.
3. Belaire-Franch, J. A note on the evidence of inflation persistence around the world. Empir. Econ. 2019, 56, 1477–1487.
4. Sibbertsen, P.; Willert, J. Testing for a break in persistence under long-range dependencies and mean shifts. Stat. Pap. 2012, 53, 357–370.
5. Kim, J.Y. Detection of change in persistence of a linear time series. J. Econom. 2000, 95, 97–116.
6. Leybourne, S.; Kim, T.H.; Taylor, A.R. Detecting multiple changes in persistence. Stud. Nonlinear Dyn. Econom. 2007, 11.
7. Leybourne, S.; Taylor, R.; Kim, T.H. CUSUM of squares-based tests for a change in persistence. J. Time Ser. Anal. 2007, 28, 408–433.
8. Cerqueti, R.; Costantini, M.; Gutierrez, L.; Westerlund, J. Panel stationary tests against changes in persistence. Stat. Pap. 2019, 60, 1079–1100.
9. Kejriwal, M. A robust sequential procedure for estimating the number of structural changes in persistence. Oxf. Bull. Econ. Stat. 2020, 82, 669–685.
10. Jin, H.; Zhang, S. Spurious regression between long memory series due to mis-specified structural break. Commun. Stat. Simul. Comput. 2018, 47, 692–711.
11. Jin, H.; Zhang, S.; Zhang, J. Modified tests for change point in variance in the possible presence of mean breaks. J. Stat. Comput. Simul. 2018, 88, 2651–2667.
12. Wingert, S.; Mboya, M.P.; Sibbertsen, P. Distinguishing between breaks in the mean and breaks in persistence under long memory. Econ. Lett. 2020, 193, 1093338.
13. Grote, C. Issues in Nonlinear Cointegration, Structural Breaks and Changes in Persistence. Ph.D. Thesis, Leibniz Universität Hannover, Hannover, Germany, 2020.
14. Mittnik, S.; Rachev, S.; Paolella, M. Stable Paretian modeling in finance: Some empirical and theoretical aspects. In A Practical Guide to Heavy Tails; Birkhäuser: Boston, MA, USA, 1998; pp. 79–110.
15. Han, S.; Tian, Z. Bootstrap testing for changes in persistence with heavy-tailed innovations. Commun. Stat. Theory Methods 2007, 36, 2289–2299.
16. Qin, R.; Liu, Y. Block bootstrap testing for changes in persistence with heavy-tailed innovations. Commun. Stat. Theory Methods 2018, 47, 1104–1116.
17. Chen, Z.; Tian, Z.; Zhao, C. Monitoring persistence change in infinite variance observations. J. Korean Stat. Soc. 2012, 41, 61–73.
18. Yang, Y.; Jin, H. Ratio tests for persistence change with heavy tailed observations. J. Netw. 2014, 9, 1409.
19. Jin, H.; Zhang, S.; Zhang, J. Spurious regression due to the neglected of non-stationary volatility. Comput. Stat. 2017, 32, 1065–1081.
20. Jin, H.; Zhang, J.; Zhang, S. The spurious regression of AR(p) infinite variance series in presence of structural break. Comput. Stat. Data Anal. 2013, 67, 25–40.
21. Wang, D. Monitoring persistence change in heavy-tailed observations. Symmetry 2021, 13, 936.
22. Chan, N.H.; Zhang, R.M. Inference for nearly nonstationary processes under strong dependence with infinite variance. Stat. Sin. 2009, 19, 925–947.
23. Cheng, Y.; Hui, Y.; McAleer, M.; Wong, W.K. Spurious relationships for nearly non-stationary series. J. Risk Financ. Manag. 2021, 14, 366.
24. Paparoditis, E.; Politis, D.N. Residual-based block bootstrap for unit root testing. Econometrica 2003, 71, 813–855.
25. Chan, N.H.; Wei, C.Z. Asymptotic inference for nearly nonstationary AR(1) processes. Ann. Stat. 1987, 15, 1050–1063.
26. Leybourne, S.J.; Kim, T.H.; Taylor, A.M.R. Regression-based tests for a change in persistence. Oxf. Bull. Econ. Stat. 2006, 68, 595–621.
27. Phillips, P.C.B.; Solo, V. Asymptotics for linear processes. Ann. Stat. 1992, 20, 971–1001.
28. Ibragimov, I.A. Some limit theorems for stationary processes. Theory Probab. Appl. 1962, 7, 349–382.
29. Resnick, S.I. Point processes, regular variation and weak convergence. Adv. Appl. Probab. 1986, 18, 66–138.
30. Chan, N.H. Inference for near-integrated time series with infinite variance. J. Am. Stat. Assoc. 1990, 85, 1069–1074.
31. Mandelbrot, B. The variation of some other speculative prices. J. Bus. 1967, 40, 393–413.
32. Arcones, M.A.; Giné, E. The bootstrap of the mean with arbitrary bootstrap sample size. Ann. Inst. Henri Poincaré Probab. Stat. 1989, 25, 457–481.
33. Chen, Z.; Jin, Z.; Tian, Z.; Qi, P. Bootstrap testing multiple changes in persistence for a heavy-tailed sequence. Comput. Stat. Data Anal. 2012, 56, 2303–2316.
34. Nolan, J.P. Numerical calculation of stable densities and distribution functions. Commun. Stat. Stoch. Models 1997, 13, 759–774.
35. Cherstvy, A.G.; Vinod, D.; Aghion, E.; Chechkin, A.V.; Metzler, R. Time averaging, ageing and delay analysis of financial time series. New J. Phys. 2017, 19, 063045.
36. Yu, Z.; Gao, H.; Cong, X.; Wu, N.; Song, H.H. A survey on cyber-physical systems security. IEEE Internet Things J. 2023, 10, 21670–21686.
37. Kantz, H.; Schreiber, T. Nonlinear Time Series Analysis; Cambridge University Press: Cambridge, UK, 2004.
38. Knight, K. Limit theory for autoregressive-parameter estimates in an infinite-variance random walk. Can. J. Stat. 1989, 17, 261–278.
39. Amemiya, T. Regression analysis when the dependent variable is truncated normal. Econometrica 1973, 41, 997–1016.
Figure 1. US inflation rate data y_t, t = 1, …, 224.
Figure 2. First-order difference in the inflation rate data.
Figure 3. USD/CNY exchange rate data y_t, t = 1, …, 580.
Figure 4. First-order difference in the exchange rate data.
Table 1. Empirical rejection frequencies of Ξ and Q.

             κ = 2.0          κ = 1.6          κ = 1.2          κ = 0.8          κ = 0.4
τ*    ρ2     Ξ       Q        Ξ       Q        Ξ       Q        Ξ       Q        Ξ       Q
(a) Empirical Size
             0.0543  0.0480   0.0433  0.0503   0.0527  0.0487   0.0513  0.0443   0.0527  0.0437
(b) Power Values
0.3   0.3    0.9517  0.7413   0.8653  0.5717   0.7507  0.4043   0.5907  0.2933   0.3923  0.0010
      0.5    0.9170  0.6690   0.8133  0.5147   0.6943  0.3633   0.5307  0.2280   0.3637  0.0003
      0.7    0.8247  0.5273   0.7170  0.3927   0.6003  0.2660   0.4480  0.1757   0.3060  0.0003
0.5   0.3    0.9977  0.9170   0.9750  0.7977   0.9167  0.6570   0.7843  0.4387   0.4957  0.0023
      0.5    0.9910  0.8730   0.9567  0.7420   0.8727  0.5667   0.7177  0.3923   0.4270  0.0017
      0.7    0.9400  0.7410   0.8860  0.5900   0.7777  0.4397   0.6160  0.3083   0.3763  0.0010
0.7   0.3    0.9973  0.9273   0.9860  0.8250   0.9597  0.6723   0.8603  0.4833   0.5407  0.0153
      0.5    0.9860  0.8557   0.9707  0.7413   0.9127  0.5777   0.7793  0.4103   0.4667  0.0090
      0.7    0.9280  0.6757   0.8793  0.5823   0.8027  0.4430   0.6443  0.2957   0.4057  0.0023
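For orientation, the power entries in Table 1 (and in the H10 tables further below) refer to a single switch in persistence at fraction τ* of the sample. The following sketch simulates one such series; it rests on illustrative assumptions rather than the paper's exact design: θ is read as an MA(1) coefficient, ρ2 as the AR coefficient of the I(0) regime after the break, and the shocks are drawn as symmetric κ-stable, whereas the precise data-generating process is the one defined in the simulation section of the paper.

```python
# Sketch (illustrative assumptions, not the paper's exact DGP): a series that is
# I(1) up to observation [tau_star * T] and I(0) with AR coefficient rho2 afterwards,
# driven by MA(1) innovations built from symmetric kappa-stable shocks.
import numpy as np
from scipy.stats import levy_stable

def simulate_persistence_change(T=300, kappa=1.2, theta=0.6, rho2=0.5,
                                tau_star=0.5, rng=None):
    rng = np.random.default_rng() if rng is None else rng
    eps = levy_stable.rvs(alpha=kappa, beta=0, size=T + 1, random_state=rng)
    u = eps[1:] + theta * eps[:-1]            # MA(1) innovations
    k = int(tau_star * T)
    y = np.empty(T)
    y[0] = u[0]
    for t in range(1, T):
        rho = 1.0 if t < k else rho2          # unit-root regime, then stationary regime
        y[t] = rho * y[t - 1] + u[t]
    return y

y = simulate_persistence_change()             # one replication of length T = 300
```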
Table 2. Properties of the tests under H1, H0, and near-unit root.

             κ = 2.0          κ = 1.6          κ = 1.2          κ = 0.8          κ = 0.4
θ     ρt     Ξ(U)    Ξ(L)     Ξ(U)    Ξ(L)     Ξ(U)    Ξ(L)     Ξ(U)    Ξ(L)     Ξ(U)    Ξ(L)
0000000.004300.014000.03100.0003
0.50000000.00100.00130.00130.0047
0.90.005000.004700.00630.00370.00930.00500.00570.0167
1.00.05430.04730.04330.05300.05270.04430.05130.04800.05270.0473
0.6000000.001700.00800.00070.05930
0.5000.004000.060300.20430.00100.51400.0007
0.90000000.00230.00200.01000.0083
1.00.05070.02630.04430.01800.04230.02270.05030.04670.05270.0403
−0.6000000.005300.02370.00030.04930.0017
0.5000.004300.028700.046700.00400.0030
0.90.017000.014700.01630.00030.01500.00200.01770.0083
1.00.05400.05170.05500.05370.05270.02400.05700.02770.04500.0440
θ     γ      ρt = 1 − γ/T
010.07870.03670.06470.03430.07100.04300.05500.04870.03870.0463
30.09670.01530.07770.01600.06970.02970.05530.03030.02730.0320
50.09830.00770.07270.00670.05920.01900.03800.02130.02370.0327
0.610.06070.00530.04670.00500.05100.01330.05470.01570.06370.0287
30.08030.00030.05030.00070.04570.00330.04870.00970.05400.0203
50.048000.028000.03130.00130.03730.00670.04170.0240
−0.610.07470.02930.09430.03300.08530.01630.07100.02330.05370.0353
30.11930.01370.12630.02100.09000.00770.08530.01670.05670.0250
50.12030.00930.11270.01130.09070.00400.08700.01070.04330.0223
Table 3. Empirical rejection frequencies for T = 500.

             κ = 2.0          κ = 1.6          κ = 1.2          κ = 0.8          κ = 0.4
θ     ρt     Ξ(U)    Ξ(L)     Ξ(U)    Ξ(L)     Ξ(U)    Ξ(L)     Ξ(U)    Ξ(L)     Ξ(U)    Ξ(L)
0000000.003000.01070.00030.02730.0007
0.500000000.00030.00030.0037
0.9000.001300.00430.00070.00730.00330.00330.0110
1.00.05570.04700.04630.05870.04070.05830.04970.04530.04570.0507
0.6000000.001700.008000.04800.0003
0.5000.006300.076000.217300.52330.0003
0.90000000.00070.00030.00500.0070
1.00.05370.03700.04730.02770.04130.03230.05600.04700.05170.0490
−0.6000000.004000.01130.00070.03070.0010
0.5000.006000.024300.058000.00300.0020
0.90.001000.004700.00830.00030.01130.00170.00770.0090
1.00.05530.05170.05630.04330.05370.04530.05600.02700.04470.0473
θ     γ      ρt = 1 − γ/T
010.08200.03230.06470.03530.05930.05070.05270.03800.03470.0493
30.09870.01430.08230.01570.06900.02670.05070.03100.03500.0470
50.10530.00770.07630.00570.07200.01570.04130.02200.01900.0340
0.610.07330.01800.05500.00770.05030.01170.05970.01930.05700.0313
30.08370.00130.055000.05170.00300.05230.00970.05930.0223
50.067000.037000.03630.00070.04900.00600.05500.0183
−0.610.08500.03700.08300.04300.07370.03730.06900.02000.04900.0337
30.11870.01470.12000.02270.10100.01470.09430.01470.04870.0290
50.12070.00700.11530.01600.08900.00430.08570.01170.04000.0227
Table 4. Empirical rejection frequencies under H10 for T = 300.

                    κ = 2            κ = 1.6          κ = 1.2          κ = 0.8          κ = 0.4
θ     τ*    ρ2      Ξ(U)    Ξ(L)     Ξ(U)    Ξ(L)     Ξ(U)    Ξ(L)     Ξ(U)    Ξ(L)     Ξ(U)    Ξ(L)
00.30.30.951700.865300.750700.601000.39230.0027
0.50.917000.813300.694300.53070.00070.36370.0030
0.70.824700.717000.600300.44800.00130.31930.0073
0.50.30.997700.975000.916700.784300.49570.0047
0.50.991000.956700.872700.71770.00070.43900.0067
0.70.940000.886000.782700.61600.00170.38170.0087
0.70.30.997300.990700.959700.860300.55570.0080
0.50.986000.970700.912700.77930.00070.48200.0083
0.70.928000.879300.80600.00030.64430.00300.40970.0107
0.60.30.30.571000.492300.495300.548700.63800.0023
0.50.563700.494700.532300.63370.00070.71500.0013
0.70.543700.448300.450300.54700.00030.66430.0020
0.50.30.850700.769300.745700.73470.00030.64630.0047
0.50.828700.729000.735300.74970.00070.70000.0063
0.70.752000.643700.631700.64230.00200.63400.0063
0.70.30.866300.806700.760000.74300.00030.61770.0067
0.50.822700.720700.688700.66170.00170.59400.0067
0.70.730300.634300.574000.55370.00200.50670.0137
−0.60.30.30.986700.942000.871000.782700.61270.0023
0.50.959300.906700.817300.686700.40930.0017
0.70.888300.810000.668000.519300.36400.0043
0.50.30.999300.990300.955700.880300.61030.0040
0.50.996300.979300.904300.760700.46930.0033
0.70.967700.917000.790700.63030.00070.39900.0047
0.70.30.999300.994000.963300.870300.57470.0033
0.50.995700.981300.898300.748300.47500.0050
0.70.957300.924700.783700.61430.00070.39130.0080
Table 5. Empirical rejection frequencies under H10 for T = 500.

                    κ = 2            κ = 1.6          κ = 1.2          κ = 0.8          κ = 0.4
θ     τ*    ρ2      Ξ(U)    Ξ(L)     Ξ(U)    Ξ(L)     Ξ(U)    Ξ(L)     Ξ(U)    Ξ(L)     Ξ(U)    Ξ(L)
00.30.30.992000.921700.832700.657700.43130.0017
0.50.980300.907300.779700.604000.39830.0030
0.70.935700.826300.706000.55200.00070.34570.0047
0.50.3100.987000.956300.857300.56370.0030
0.50.999000.982300.923700.78600.00070.48900.0067
0.70.990700.957000.872000.69930.00100.42800.0063
0.70.3100.999000.979700.923000.61800.0023
0.5100.992700.953700.856700.53930.0060
0.70.987700.963300.89800.00070.75500.00070.46270.0107
0.60.30.30.721300.636700.594700.617300.69170.0020
0.50.725300.631000.635000.701000.74870.0017
0.70.696300.596300.588700.626700.71870.0030
0.50.30.942700.886300.835000.801300.68100.0053
0.50.939700.859300.822300.813700.73630.0053
0.70.896700.796000.774300.705300.69270.0043
0.70.30.969300.917000.849000.817700.66670.0053
0.50.939300.872000.801300.755700.64130.0060
0.70.888700.788000.702700.64900.00030.55130.0097
−0.60.30.30.999300.982000.919000.844000.65300
0.50.995000.959700.887300.736700.46000.0020
0.70.958300.906700.766000.625000.39830.0037
0.50.3100.998700.979300.902300.67100.0007
0.5100.995700.939700.812700.50870.0037
0.70.995300.974700.874300.710700.45630.0057
0.70.3100.998700.980700.914300.62900.0017
0.5100.995300.933700.803300.51230.0037
0.70.996700.972000.863700.701300.44970.0050
Table 6. Monte Carlo mean and standard error of τ̂, T = 100.

             κ = 2            κ = 1.6          κ = 1.2          κ = 0.8          κ = 0.4
τ*    ρ2     mean    se       mean    se       mean    se       mean    se       mean    se
0.3   0.3    0.3352  0.0869   0.3357  0.1005   0.3539  0.1363   0.3910  0.1909   0.4335  0.2344
      0.5    0.3698  0.1386   0.3639  0.1390   0.3648  0.1501   0.3904  0.1869   0.4276  0.2272
      0.7    0.4097  0.1895   0.3996  0.1817   0.3849  0.1731   0.3697  0.1661   0.3608  0.1719
0.5   0.3    0.5335  0.0841   0.5277  0.0891   0.5263  0.1087   0.5279  0.1350   0.5217  0.1578
      0.5    0.5538  0.1093   0.5393  0.1094   0.5378  0.1206   0.5246  0.1377   0.5200  0.1589
      0.7    0.5708  0.1436   0.5564  0.1441   0.5359  0.1487   0.5115  0.1577   0.4737  0.1765
0.7   0.3    0.7185  0.0557   0.7103  0.0618   0.6941  0.0973   0.6831  0.1194   0.6498  0.1749
      0.5    0.7201  0.0732   0.7102  0.0867   0.6963  0.1071   0.6815  0.1317   0.6533  0.1748
      0.7    0.7188  0.1059   0.7062  0.1215   0.6937  0.1397   0.6642  0.1736   0.6294  0.2082
Table 7. Monte Carlo mean and standard error of τ̃, T = 100.

             κ = 2            κ = 1.6          κ = 1.2          κ = 0.8          κ = 0.4
τ*    ρ2     mean    se       mean    se       mean    se       mean    se       mean    se
0.3   0.3    0.3332  0.0794   0.3528  0.1086   0.3868  0.1539   0.4450  0.2166   0.5106  0.2814
      0.5    0.3462  0.1042   0.3674  0.1345   0.3971  0.1700   0.4511  0.2273   0.5089  0.2844
      0.7    0.3702  0.1493   0.3882  0.1678   0.4101  0.1901   0.4525  0.2300   0.5051  0.2831
0.5   0.3    0.5177  0.0898   0.5312  0.1153   0.5605  0.1430   0.5886  0.1805   0.6184  0.2152
      0.5    0.5134  0.1134   0.5219  0.1297   0.5473  0.1540   0.5766  0.1843   0.5975  0.2131
      0.7    0.5055  0.1457   0.5193  0.1534   0.5366  0.1655   0.5643  0.1874   0.5747  0.2126
0.7   0.3    0.7028  0.0894   0.7088  0.1068   0.7036  0.1358   0.7081  0.1548   0.6871  0.1900
      0.5    0.6797  0.1273   0.6876  0.1365   0.6812  0.1595   0.6798  0.1774   0.6594  0.2073
      0.7    0.6415  0.1773   0.6408  0.1858   0.6407  0.1980   0.6392  0.2078   0.6276  0.2226