Article

Testing a Class of Piece-Wise CHARN Models with Application to Change-Point Study

by Youssef Salman 1,2,†, Joseph Ngatchou-Wandji 3,*,† and Zaher Khraibani 2

1 Mines Saint-Etienne, CNRS, UMR 6158 LIMOS, Institut Henri Fayol, University Clermont Auvergne, 42023 Saint-Etienne, France
2 Department of Applied Mathematics, Faculty of Sciences, Lebanese University, Beirut 2038 1003, Lebanon
3 EHESP Rennes and Institut Élie Cartan de Lorraine, CEDEX, 54506 Vandoeuvre-lès-Nancy, France
* Author to whom correspondence should be addressed.
† These authors contributed equally to this work.
Mathematics 2024, 12(13), 2092; https://doi.org/10.3390/math12132092
Submission received: 3 June 2024 / Revised: 29 June 2024 / Accepted: 1 July 2024 / Published: 3 July 2024
(This article belongs to the Section Probability and Statistics)

Abstract: We study a likelihood ratio test for testing the conditional mean of a class of piece-wise stationary CHARN models. We establish the locally asymptotically normal (LAN) structure of the family of likelihoods under study. We prove that the test is asymptotically optimal, and we give an explicit form of its asymptotic local power. We describe an algorithm for detecting change points and estimating their locations. The estimates are obtained as the time indices maximizing the estimate of the local power. The simulation study we conduct shows the good performance of our method on the examples considered. This method is also applied to a set of financial data.
MSC:
62M10; 62M02; 62M05; 62F03; 62F05

1. Introduction

Let $d, p, k, n \in \mathbb{N}$ with $k \ll n$. Assume the observations $X_1, \ldots, X_n$ are issued from the following piece-wise stationary CHARN model (see, e.g., [1])

$$X_t = T\big(\rho_0 + \gamma\,\omega(t);\, \mathbf{X}_{t-1}\big) + V(\mathbf{X}_{t-1})\,\varepsilon_t, \quad t \in \mathbb{Z}, \tag{1}$$

with

$$X_t = Y_{t,j} = T\big(\rho_0 + \gamma_j\,\omega_j(t);\, \mathbf{X}_{t-1,j}\big) + V(\mathbf{X}_{t-1,j})\,\varepsilon_t, \quad \tau_{j-1} \le t < \tau_j, \; j = 1,\ldots,k+1, \tag{2}$$

where, for $j = 1,\ldots,k$, $(Y_{t,j})_{t\in\mathbb{Z}}$ is a stationary and ergodic process; $\rho_0 \in \mathbb{R}^p$, $T(\rho_0,\cdot)$ and $V(\cdot)$ are real-valued functions with $\inf_{x\in\mathbb{R}^d} V(x) > 0$; the $\tau_j$, $j = 0,\ldots,k+1$, are potential instants of changes, with $\tau_0 = 1$ and $\tau_{k+1} = n+1$; for $j = 1,\ldots,k$, $\mathbf{X}_{t,j} = (Y_{t,j},\ldots,Y_{t-d+1,j})$, $\mathbf{X}_{\tau_{j-1}+\ell} = \mathbf{X}_{\tau_{j-1}+\ell,\,j}$, $\ell = 0,\ldots,d-1$, and for $t \in [\tau_{j-1}+d-1,\, \tau_j)$, $\mathbf{X}_t = (X_t,\ldots,X_{t-d+1})$; for $j,\ell = 1,\ldots,k$, $j \ne \ell$, the processes $(Y_{t,j})_{t\in\mathbb{Z}}$ and $(Y_{t,\ell})_{t\in\mathbb{Z}}$ are mutually independent; $(\varepsilon_t)_{t\in\mathbb{Z}}$ is a standard white noise with density $f$; $\gamma = (\gamma_1^\top,\ldots,\gamma_{k+1}^\top)^\top$, $\gamma_j \in \mathbb{R}^p$, $j = 1,\ldots,k+1$; $\omega(t) = \big(\mathbb{1}_{[\tau_0,\tau_1)}(t), \mathbb{1}_{[\tau_1,\tau_2)}(t), \ldots, \mathbb{1}_{[\tau_{k-1},\tau_k)}(t), \mathbb{1}_{[\tau_k,\tau_{k+1})}(t)\big)^\top = (\omega_1(t),\ldots,\omega_{k+1}(t))^\top \in \{0,1\}^{k+1}$; for $\gamma = (\gamma_1^\top,\ldots,\gamma_{k+1}^\top)^\top$ and $\omega(t) = (\omega_1(t),\ldots,\omega_{k+1}(t))^\top$, $\gamma\,\omega(t)$ stands for $\gamma\,\omega(t) = \gamma_1\omega_1(t) + \cdots + \gamma_{k+1}\omega_{k+1}(t) \in \mathbb{R}^p$, and $\gamma_i\omega_i = (\gamma_{i,1}\omega_i,\ldots,\gamma_{i,p}\omega_i)^\top \in \mathbb{R}^p$. The class of models (2) is very large. It contains models such as AR($p$), ARCH($p$), EXPAR($p$), and GEXPAR($p$), whose statistical and probabilistic properties are widely studied in the literature (see, e.g., [2] for a study of the ergodicity of GEXPAR models).
As noted in [3], the assumption that $(\mathbf{X}_{t,j})_{t\in\mathbb{Z}}$ and $(\mathbf{X}_{t,\ell})_{t\in\mathbb{Z}}$ are independent can be extended to some weak dependence assumption. In this paper, for $\gamma_0 \in \mathbb{R}^{p(k+1)}$ and $\beta \in \mathbb{R}^{p(k+1)}$ depending on the $\tau_j$'s, we construct a likelihood ratio test for testing

$$H_0: \gamma = \gamma_0 \quad \text{against} \quad H_\beta^{(n)}: \gamma = \gamma_n = \gamma_0 + \frac{\beta}{\sqrt{n}}. \tag{3}$$
A particular case of this work is studied in [4]. The literature on change points is extensive and varied. Some basic notions and theory are presented in [5], where one can find a number of references to the first works on the subject. Most of the recent papers on change points are in time series or regression contexts. Various methods and techniques are used for the study. Ref. [6] proposes a test for parameter changes. The observations are assumed to follow an exponential distribution. The author presents a derivation using the method of [7]. Ref. [8] studies the problem of changes in the parameters of AR models and the variance in the white noise using the likelihood ratio statistic. Ref. [9] proposes test statistics for detecting a break in the trend function of a dynamic univariate time series. The tests are based on the mean and exponential statistics of [10] and the supremum statistic of [11]. Another method for detecting change points is introduced in [12]. The authors present a multiple-change-point analysis for which the Markov Chain Monte Carlo (MCMC) sampler plays a fundamental role. They propose an attractive methodology for the change-point problem in a Bayesian context, and the reversible jump algorithm is presented. Ref. [13] also studies the problem of detecting change points in the mean of a signal corrupted by additive noise; the number of change points is estimated by a method based on a penalized least-squares criterion. Ref. [14] uses the minimum description length for detecting change points in a non-stationary time series, with applications to GARCH models, stochastic volatility models and generalized state-space models as the parametric models for the segments. Ref. [15] uses maximum likelihood to estimate the instant of the change; the authors study the asymptotic distribution of their test by contiguity. Ref. [16] investigates the regression function or its $\nu$-th derivative in generalized linear models which may have a change (discontinuity) point at an unknown location. Ref. [17] studies change points in the mean of a sequence of independent normally distributed random vectors; the asymptotic distribution of the test statistic is studied using results from [18]. Also, Ref. [19] studies this problem for independent normal means as a multiple testing problem. The authors consider two stepwise methods, the binary segmentation method of [20] and the maximum residual down method of [21], and prove the consistency of these methods. Ref. [22] studies the existence of changes in the regression parameters of a linear model where the regressors and errors are weakly dependent, and studies the asymptotic distribution under the null hypothesis and under contiguous alternatives. In [23], the authors develop a method for detecting and estimating change points in the tail of multiple time series data. They discuss the effect of changes in the mean and variance on the tails, and focus on the detection of change points in the upper tail of the distribution of the variable of interest, based on multiple cross-sectional time series. Ref. [24] proposes a procedure based on the Bayesian information criterion (BIC), in combination with the binary segmentation algorithm, to look for changes in the mean, autoregressive coefficients, and variance of the perturbation in piecewise autoregressive processes. The authors briefly explain the Auto-PARM and Auto-SLEX methods and present different algorithms useful for the search for multiple change points. Ref. [25] proposes a likelihood ratio scan method for estimating multiple change points in piecewise stationary processes. Ref. [26] aims to estimate the instant of change in a regression model; the authors use a sequential Bayesian change-point algorithm that provides uncertainty bounds on both the number and location of the changes. A class of change-point test statistics is proposed in [27] that utilizes a weighting and trimming scheme for the cumulative sum (CUSUM) process inspired by Rényi. Using an asymptotic analysis and simulations, the authors demonstrate that this class of statistics possesses superior power compared to traditional change-point statistics based on the CUSUM process when the change point is near the beginning or end of the sample. The authors develop a generalization of these "Rényi" statistics for testing for changes in the parameters of linear and non-linear regression models, and in generalized method of moments estimation.
In this paper, we are interested in weak change detection. A weak change is one of very small magnitude. Such a change may be a harbinger signaling a forthcoming critical behavior of the phenomenon studied. It can manifest in various domains, including economics and finance, public health, bio-science, engineering, climatology, hydrology, linguistics, genomics, signal processing and many others.
Classical change detection methods can fail in detecting weak changes. Therefore, it may be of importance to develop new methods for their detection. In the context of time series, very few studies have tested no change against local alternatives of weak changes. Refs. [4,28] study this problem for the case of testing the mean of the model (1). As changes can happen elsewhere than in the mean, it can be interesting to study more general models. Our main purpose in this paper is to extend these works to the conditional mean of (1). With this purpose, we proceed with the same techniques. We first construct a test based on the likelihood ratio, and we study its null distribution. Next, we establish the LAN property for the likelihood families under study. From this, we prove the contiguity of $H_0$ and $H_\beta^{(n)}$ and use it together with Le Cam's third lemma to find the asymptotic distribution of the test under $H_\beta^{(n)}$. Then, we prove the optimality of our test in the case in which the parameters are known. In the case in which the parameters are assumed to be unknown, we prove the convergence of the estimated version of the central sequence, based on the parameter estimators, to its true version. Finally, we prove that the test remains optimal in the case of unknown parameters. Based on the explicit expression of the power, we construct an algorithm for detecting change points and estimating their locations. The simulation study shows the good performance of our method for detecting weak changes and estimating their locations in the examples considered.
In Section 2, we specify the notation and list some of the main assumptions. In Section 3, we state the theoretical results in the case that $\rho_0$ is known and in the case that it is unknown. The results of this section are used in Section 4 to construct an algorithm for testing change points and estimating their locations. In Section 5, a simulation experiment is conducted for the application of our algorithm. Section 6 concludes our work, and the last section contains the proofs of the results stated in Section 3.

2. Notation and Assumptions

In this section, we specify the notation and list some of the main assumptions needed.

2.1. The Notation

In the sequel, $\mathcal{M}_{m,n}(\mathbb{R})$ is the space of real $m \times n$ matrices and $\mathcal{M}_n(\mathbb{R}) = \mathcal{M}_{n,n}(\mathbb{R})$. $M^\top$ is the transpose of $M \in \mathcal{M}_{m,n}(\mathbb{R})$, and $|||M|||_{m\times n}$ is its Euclidean matrix norm. $\|\cdot\|_p$ is the Euclidean norm of $\mathbb{R}^p$.

Let $U \in \mathbb{R}^{p(k+1)}$; we write $U = (U_1^\top, \ldots, U_{k+1}^\top)^\top$ and, for any $i \in \{1,\ldots,k+1\}$, $U_i = (U_{i1}, \ldots, U_{ip})^\top$.

For $M \in \mathcal{M}_{p(k+1)}(\mathbb{R})$, we write

$$M = \begin{pmatrix} M_{1,1} & \cdots & M_{1,k+1} \\ \vdots & \ddots & \vdots \\ M_{k+1,1} & \cdots & M_{k+1,k+1} \end{pmatrix},$$

where, for $i,j \in \{1,\ldots,k+1\}$, $M_{i,j} \in \mathcal{M}_p(\mathbb{R})$.

Let $\ell: \mathbb{R}^d \to \mathbb{R}$ be a differentiable function on $\mathbb{R}^d$. For any $\gamma \in \mathbb{R}^{p(k+1)}$, we denote by $D_\gamma[\ell]$ the following vector:

$$D_\gamma[\ell(x)] = \big(\nabla_{\gamma_1}[\ell(x)]^\top, \ldots, \nabla_{\gamma_{k+1}}[\ell(x)]^\top\big)^\top \in \mathbb{R}^{p(k+1)},$$

where $\nabla_{\gamma_i}[\ell(x)]$ is the gradient of $\ell$ with respect to $\gamma_i$ at $x \in \mathbb{R}^d$:

$$\nabla_{\gamma_i}[\ell(x)] = \left(\frac{\partial \ell}{\partial \gamma_{i,1}}(x), \frac{\partial \ell}{\partial \gamma_{i,2}}(x), \ldots, \frac{\partial \ell}{\partial \gamma_{i,p}}(x)\right)^\top \in \mathbb{R}^p,$$

where, for $i = 1,\ldots,k+1$ and $j = 1,\ldots,p$, $\partial \ell/\partial \gamma_{i,j}$ is the partial derivative of $\ell$ with respect to $\gamma_{i,j}$.

We also denote by $H_\gamma \ell(x)$ the matrix

$$H_\gamma \ell(x) = \begin{pmatrix} \nabla^2_{\gamma_1\gamma_1} \ell(x) & \cdots & \nabla^2_{\gamma_1\gamma_{k+1}} \ell(x) \\ \vdots & \ddots & \vdots \\ \nabla^2_{\gamma_{k+1}\gamma_1} \ell(x) & \cdots & \nabla^2_{\gamma_{k+1}\gamma_{k+1}} \ell(x) \end{pmatrix} \in \mathcal{M}_{p(k+1)}(\mathbb{R}), \quad x \in \mathbb{R}^d,$$

where, for every $i,j \in \{1,\ldots,k+1\}$,

$$\nabla^2_{\gamma_i\gamma_j} \ell(x) = \begin{pmatrix} \frac{\partial^2 \ell}{\partial\gamma_{i,1}\partial\gamma_{j,1}}(x) & \cdots & \frac{\partial^2 \ell}{\partial\gamma_{i,p}\partial\gamma_{j,1}}(x) \\ \vdots & \ddots & \vdots \\ \frac{\partial^2 \ell}{\partial\gamma_{i,1}\partial\gamma_{j,p}}(x) & \cdots & \frac{\partial^2 \ell}{\partial\gamma_{i,p}\partial\gamma_{j,p}}(x) \end{pmatrix} \in \mathcal{M}_p(\mathbb{R}), \quad x \in \mathbb{R}^d.$$

For any differentiable function $g$ with derivative $g'$, we denote

$$\phi_g = -\frac{g'}{g} \quad \text{and} \quad I(g) = \int_{\mathbb{R}} \phi_g^2(x)\, g(x)\, dx.$$
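As an illustration, for the standard Gaussian density $f(x) = (2\pi)^{-1/2}e^{-x^2/2}$ (the density used in most of the simulations of Section 5), $f'(x) = -x f(x)$, so that

$$\phi_f(x) = x \qquad \text{and} \qquad I(f) = \int_{\mathbb{R}} x^2 f(x)\,dx = 1.$$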
For any $t \in \{1,\ldots,n\}$, let $\mathcal{F}_t = \sigma(X_1,\ldots,X_t)$ be the $\sigma$-algebra generated by $X_1,\ldots,X_t$, such that $\varepsilon_t$ is independent of $\mathcal{F}_{t-1}$.

2.2. The Main Assumptions

In this section, we outline the key assumptions needed for our methodology; they are crucial for establishing our theoretical results. A remark following their enumeration articulates their significance. We assume the following:
(A1) $\int_{\mathbb{R}} x f(x)\,dx = 0$ and $\int_{\mathbb{R}} x^2 f(x)\,dx = 1$.

(A2) $f$ is differentiable with derivative $f'$.

(A3) $\lim_{x\to+\infty} f(x) = \lim_{x\to-\infty} f(x) = 0 = \lim_{x\to+\infty} f'(x) = \lim_{x\to-\infty} f'(x)$.

(A4) $\phi_f$ is differentiable with derivative $\phi_f'$, and $\phi_f'$ is $c_\phi$-Lipschitz with $0 < c_\phi < +\infty$.

(A5) $\max\left(\int_{\mathbb{R}} |\phi_f(x)|^3 f(x)\,dx,\ \int_{\mathbb{R}} \phi_f'(x) f(x)\,dx\right) < \infty$.

(A6) For any $j = 1,\ldots,k+1$, $n_j(n)$ designates the number of observations between the instants $\tau_{j-1}$ and $\tau_j$, and is such that $n_j(n) \to +\infty$ and $n_j(n)/n \to \alpha_j$ as $n$ tends to $+\infty$.

(A7) For all $j = 1,\ldots,k+1$, the sequence $(X_t)_{t\in\mathbb{Z}}$ is stationary and ergodic on $[\tau_{j-1},\tau_j)$ with stationary cumulative distribution function $F_j$.

(A8) For any $j = 1,\ldots,k+1$, $1 \le h \le m \le p$ and $b \le 3$,

$$\eta_{j,b}^{(h,m)}(\rho_0,\gamma_0) = I(f) \int_{\mathbb{R}^d} \frac{1}{V^b(x)} \frac{\partial T}{\partial \gamma_{j,h}}(\rho_0,\gamma_0,x)\, \frac{\partial T}{\partial \gamma_{j,m}}(\rho_0,\gamma_0,x)\, dF_j(x) < \infty.$$

(A9) $\max\left(\sup_\gamma |T(\rho_0,\gamma,x)|,\ \sup_\gamma \big\|\nabla_\gamma[T(\rho_0,\gamma,x)]\big\|_{p(k+1)},\ \sup_\gamma |||H_\gamma[T(\rho_0,\gamma,x)]|||_{p(k+1)}\right) < \nu(x)$, for some positive function $\nu$ defined on $\mathbb{R}^d$.

(A10) For $j = 1,\ldots,k+1$ and $a,b \in \{1,2,3\}$, $\int_{\mathbb{R}^d} \dfrac{\nu^b(x)}{V^a(x)}\, dF_j(x) < \infty$.

(A11) The density function of the first $d$ observations on each interval $[\tau_{j-1},\tau_j)$, $j = 1,\ldots,k+1$, under $H_\beta^{(n)}$ converges to its density function under $H_0$.
Remark 1.
  • Assumptions (A1)–(A5) are regularity properties required of the density $f$. They are satisfied at least by the standard Gaussian density function.
  • Assumption (A6) allows for the application of the ergodic theorem on each segment $[\tau_{j-1},\tau_j)$. This assumption is very usual in the literature.
  • Assumption (A7) ensures the ergodicity and stationarity of the process on each segment $[\tau_{j-1},\tau_j)$. It holds at least for piece-wise stationary and ergodic AR and ARCH models.
  • Assumptions (A8)–(A10) are constraints on the function $T$ and its derivatives. They are satisfied by usual models such as parametric AR, ARCH, TARCH, and EXPAR models with Gaussian noise.
  • Assumption (A11) allows for the simplification of the forms of the likelihoods.

3. The Theoretical Results

3.1. The Parameters Are Known

We first study the case where $\rho_0$ and $\gamma_0$ are assumed to be known. This will shed light on the case where they are unknown. We start by establishing LAN and contiguity results.
We denote by $\Theta_n(\rho_0,\gamma_0,\beta)$ the log-likelihood ratio of $H_0$ against $H_\beta^{(n)}$, and we define the sequence $\Pi_n$ by

$$\Pi_n(\rho_0,\gamma_0,\beta) = \frac{1}{\sqrt{n}} \sum_{j=1}^{k+1} \sum_{t=\tau_{j-1}}^{\tau_j - 1} \frac{1}{V(\mathbf{X}_{t-1})}\, \beta_j^\top N(\rho_0,\gamma_0,\mathbf{X}_{t-1})\, \phi_f[\varepsilon_t(\rho_0,\gamma_0)], \tag{4}$$

where

$$N(\rho_0,\gamma_0,\mathbf{X}_{t-1}) = \omega(t)^\top D_\gamma T(\rho_0,\gamma_0,\mathbf{X}_{t-1}) \in \mathbb{R}^p,$$

and, for all $\gamma = (\gamma_1^\top,\ldots,\gamma_{k+1}^\top)^\top \in \mathbb{R}^{p(k+1)}$,

$$\varepsilon_t(\rho_0,\gamma) = \frac{X_t - T(\rho_0,\gamma,\mathbf{X}_{t-1})}{V(\mathbf{X}_{t-1})}, \quad t \in \mathbb{Z}.$$
Theorem 1 (LAN).
Assume that (A1)–(A10) hold. Then, for any $\beta \in \mathbb{R}^{p(k+1)}$, under $H_0$, as $n \to +\infty$,

$$\Theta_n(\rho_0,\gamma_0,\beta) = \Pi_n(\rho_0,\gamma_0,\beta) - \frac{\eta(\rho_0,\gamma_0,\beta)}{2} + o_P(1), \qquad \Pi_n(\rho_0,\gamma_0,\beta) \xrightarrow{\mathcal{D}} \mathcal{N}\big(0, \eta(\rho_0,\gamma_0,\beta)\big),$$

with

$$\eta(\rho_0,\gamma_0,\beta) = \sum_{j=1}^{k+1} \alpha_j \sum_{1 \le h \le m \le p} \beta_{j,h}\,\beta_{j,m}\, \eta_{j,2}^{(h,m)}(\rho_0,\gamma_0), \qquad \eta_{j,2}^{(h,m)}(\rho_0,\gamma_0) = I(f) \int_{\mathbb{R}^d} \frac{1}{V^2(x)} \frac{\partial T}{\partial\gamma_{j,h}}(\rho_0,\gamma_0,x)\, \frac{\partial T}{\partial\gamma_{j,m}}(\rho_0,\gamma_0,x)\, dF_j(x).$$

Proof. 
See Appendix A. □
Corollary 1.
Assume that (A1)–(A10) hold. Then, for any $\beta \in \mathbb{R}^{p(k+1)}$, the sequences $\{H_\beta^{(n)}: n \ge 1\}$ and $\{H_0^{(n)} = H_0: n \ge 1\}$ are contiguous. Furthermore, under $H_\beta^{(n)}$, as $n \to +\infty$,

$$\Pi_n(\rho_0,\gamma_0,\beta) \xrightarrow{\mathcal{D}} \mathcal{N}\big(\eta(\rho_0,\gamma_0,\beta),\, \eta(\rho_0,\gamma_0,\beta)\big).$$

Proof. 
See Appendix A. □
For known $\rho_0 \in \mathbb{R}^p$ and $\gamma_0 \in \mathbb{R}^{p(k+1)}$, and for any $\beta \in \mathbb{R}^{p(k+1)}$, for testing $H_0$ against $H_\beta^{(n)}$, we base our test on the statistic

$$T_n(\rho_0,\gamma_0,\beta) = \frac{\Pi_n(\rho_0,\gamma_0,\beta)}{\hat\vartheta_n(\rho_0,\gamma_0,\beta)},$$

where $\hat\vartheta_n(\rho_0,\gamma_0,\beta)$ is any consistent estimator of $\vartheta(\rho_0,\gamma_0,\beta) = \eta^{\frac{1}{2}}(\rho_0,\gamma_0,\beta)$.

At the level of significance $\alpha \in (0,1)$, we reject $H_0$ whenever $T_n(\rho_0,\gamma_0,\beta) > z_\alpha$, where $z_\alpha$ is the $(1-\alpha)$-quantile of the standard Gaussian distribution.

In practice, $\hat\vartheta_n(\rho_0,\gamma_0,\beta)$ can be taken as the natural estimator $\hat\eta_n^{\frac{1}{2}}(\rho_0,\gamma_0,\beta)$, with $\hat\eta_n(\rho_0,\gamma_0,\beta) = \sum_{j=1}^{k+1} \hat\alpha_j \sum_{1\le h\le m\le p} \beta_{j,h}\,\beta_{j,m}\, \hat\eta_{j,2}^{(h,m)}(\rho_0,\gamma_0)$, where, for $j = 1,\ldots,k+1$, $\hat\alpha_j$ is an estimator of $\alpha_j = \lim_{n\to+\infty} n_j(n)/n$ and

$$\hat\eta_{j,2}^{(h,m)}(\rho_0,\gamma_0) = I(f) \int_{\mathbb{R}^d} \frac{1}{V^2(x)} \frac{\partial T}{\partial\gamma_{j,h}}(\rho_0,\gamma_0,x)\, \frac{\partial T}{\partial\gamma_{j,m}}(\rho_0,\gamma_0,x)\, d\hat F_j(x),$$

where $\hat F_j$ is the empirical distribution function of the observations with indices in $[\tau_{j-1},\tau_j)$. This can be written again as

$$\hat\eta_{j,2}^{(h,m)}(\rho_0,\gamma_0) = \frac{I(f)}{n_j(n)} \sum_{t=\tau_{j-1}}^{\tau_j - 1} \frac{1}{V^2(\mathbf{X}_{t-1})} \frac{\partial T}{\partial\gamma_{j,h}}(\rho_0,\gamma_0,\mathbf{X}_{t-1})\, \frac{\partial T}{\partial\gamma_{j,m}}(\rho_0,\gamma_0,\mathbf{X}_{t-1}).$$
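To fix ideas, the following R sketch (R being the software used in Section 5) computes this statistic in a deliberately simple hypothetical instance: an AR(1) with one potential change ($p = 1$, $k = 1$), $\gamma_0 = 0$, $V \equiv 1$ and standard Gaussian $f$, so that $\phi_f(x) = x$, $I(f) = 1$ and $\partial T/\partial\gamma_{j,1} = X_{t-1}$. The function name and interface are ours, not the authors'.

## Minimal sketch (not the authors' code) of the test statistic T_n of
## Section 3.1 for the AR(1) case X_t = (rho0 + gamma_j) X_{t-1} + eps_t,
## with p = 1, k = 1, gamma0 = 0, V = 1 and standard Gaussian f,
## so that phi_f(x) = x, I(f) = 1 and dT/dgamma_j = X_{t-1}.
Tn_ar1 <- function(x, rho0, tau1, beta) {
  n    <- length(x) - 1                    # number of usable transitions
  xt   <- x[-1]                            # X_t, t = 1, ..., n
  xlag <- x[-length(x)]                    # X_{t-1}
  eps  <- xt - rho0 * xlag                 # residuals under H0
  seg  <- ifelse(seq_len(n) < tau1, 1, 2)  # segment index: [1, tau1), [tau1, n]
  # central sequence (4): (1/sqrt(n)) sum_t beta_{seg(t)} X_{t-1} phi_f(eps_t)
  Pi_n <- sum(beta[seg] * xlag * eps) / sqrt(n)
  # consistent estimator of eta: sum_j alpha_hat_j beta_j^2 mean_j(X_{t-1}^2)
  eta_hat <- sum((tabulate(seg, 2) / n) * beta^2 *
                   tapply(xlag^2, factor(seg, levels = 1:2), mean))
  Pi_n / sqrt(eta_hat)  # T_n; reject H0 at level alpha when T_n > qnorm(1 - alpha)
}

For instance, Tn_ar1(x, rho0 = 0.2, tau1 = 100, beta = c(0, 3)) tests no change against a local shift of the autoregressive coefficient after index 100.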
Theorem 2 (Optimality).
Assume that (A1)–(A10) hold. Then, for any given $\beta \in \mathbb{R}^{p(k+1)}$:

[i] Under $H_0$, as $n \to +\infty$, $T_n(\rho_0,\gamma_0,\beta) \xrightarrow{\mathcal{D}} \mathcal{N}(0,1)$.

[ii] Under $H_\beta^{(n)}$, at the level of significance $\alpha \in (0,1)$, the asymptotic power of the test based on $T_n(\rho_0,\gamma_0,\beta)$ is

$$P_{k,\tau_k} = 1 - \Phi\big(z_\alpha - \vartheta(\rho_0,\gamma_0,\beta)\big), \tag{7}$$

where $z_\alpha$ is the $(1-\alpha)$-quantile of the standard Gaussian distribution with cumulative distribution function $\Phi$.

[iii] The test based on $T_n(\rho_0,\gamma_0,\beta)$ is locally asymptotically optimal.
Proof. 
See Appendix A. □

3.2. The Parameters Are Unknown

Here, we place ourselves in the framework of Model (1) with $\rho_0$ unknown. We study the case in which $\gamma_0$ is known and the case in which it is unknown. We first study the asymptotic normality of an estimator of $\rho_0$ under $H_0$ and under $H_\beta^{(n)}$. For any $t = 1,\ldots,n$, $\rho \in \mathbb{R}^p$ and $\gamma \in \mathbb{R}^{p(k+1)}$, define

$$\varepsilon_t(\rho,\gamma,\mathbf{X}_{t-1}) = \frac{X_t - T(\rho + \gamma\,\omega(t),\, \mathbf{X}_{t-1})}{V(\mathbf{X}_{t-1})}.$$
We consider the following additional assumptions:

(B1) The model is identifiable; that is, for $\gamma_1, \gamma_2 \in \mathbb{R}^{p(k+1)}$, $\gamma_1 \ne \gamma_2 \implies T(\rho_0 + \gamma_1\omega, x) \ne T(\rho_0 + \gamma_2\omega, x)$, $x \in \mathbb{R}^d$, $\omega \in \{0,1\}^{k+1}$.

(B2) The true parameter $\rho_0$ has a consistent estimator $\rho_n$ that satisfies, for some measurable function $\aleph: \mathbb{R} \to \mathbb{R}$, the Bahadur representation (see, e.g., [29]), given by

$$n^{\frac{1}{2}}(\rho_n - \rho_0) = n^{-\frac{1}{2}} \sum_{t=1}^n \Upsilon(\rho_0, \mathbf{X}_{t-1})\, \aleph\big(\varepsilon_t(\rho_0,\gamma_0)\big) + o_P(1), \tag{9}$$

where

  • $\Upsilon(\rho_0, x) = (\Upsilon_1(\rho_0, x), \ldots, \Upsilon_p(\rho_0, x))^\top \in \mathbb{R}^p$;
  • for any $j = 1,\ldots,k+1$, there exists $\varrho \ge 0$ such that $\int_{\mathbb{R}^d} \|\Upsilon(\rho_0,x)\|^{2+\varrho}\, dF_j(x) < \infty$;
  • $\int_{\mathbb{R}} |\aleph(x)|^{2+\varrho} f(x)\,dx < \infty$ and $\int_{\mathbb{R}} \aleph(x) f(x)\,dx = 0$.

(B3) For any $j \in \{1,\ldots,k+1\}$ and $h \in \{1,\ldots,p\}$,

$$\max_{1\le \ell \le p} \left| \int_{\mathbb{R}^d} \frac{\Upsilon_\ell(\rho_0,x)}{V(x)}\, \frac{\partial T}{\partial\gamma_{j,h}}(\rho_0,\gamma_0,x)\, dF_j(x) \right| < \infty.$$

(B4) For any $i = 1,\ldots,k+1$ and $j = 1,\ldots,p$, writing $T_{\gamma_{i,j}} = \partial T/\partial\gamma_{i,j}$, there exists a ball $B(r)$ of radius $r$ such that

$$\max\left( \sup_{\rho\in B(r)} \big\|\nabla_\rho T(\rho,\gamma_0,x)\big\|_p,\ \sup_{\rho\in B(r)} \big\|\nabla_\rho T_{\gamma_{i,j}}(\rho,\gamma_0,x)\big\|_p,\ \sup_{\rho\in B(r)} \big|\big|\big|\nabla^2_\rho T_{\gamma_{i,j}}(\rho,\gamma_0,x)\big|\big|\big|_p \right) \le \chi(x),$$

for some positive function $\chi$ defined on $\mathbb{R}^d$.

(B5) For $j = 1,\ldots,k+1$, $\ell = 1,2,3$ and $a,b \in \{0,1,2,3\}$,

$$\lambda_{a,b}^{(j)} = \int_{\mathbb{R}^d} \frac{\nu^a(x)\,\chi^b(x)}{V^\ell(x)}\, dF_j(x) < \infty.$$
Remark 2.
  • In the literature, one can find a number of models with functions $T(\rho_0,\cdot)$ satisfying (B4) and (B5).
  • Assumption (B1) is useful for the estimation of $\rho_0$, while (B2) helps in the study of the distribution of the test statistic. It has been used before in [4]. It is satisfied by least-squares and likelihood-type estimators for some usual models within (1).
Recall that, for any $\beta \in \mathbb{R}^{p(k+1)}$, under $H_0$, the central sequence with the true parameter $\rho_0$ is denoted by $\Pi_n(\rho_0,\gamma_0,\beta)$ and its estimated version by $\Pi_n(\rho_n,\gamma_0,\beta)$.
Proposition 1.
Under the assumptions (A1)–(A10) and (B1)–(B5), we have:

[i] Under $H_0$:

$$\sqrt{n}(\rho_n - \rho_0) \xrightarrow{\mathcal{D}} \mathcal{N}(0, \Sigma),$$

[ii] Under $H_\beta^{(n)}$:

$$\sqrt{n}(\rho_n - \rho_0) \xrightarrow{\mathcal{D}} \mathcal{N}(C, \Sigma),$$

where

$$C = \int_{\mathbb{R}} \aleph(x)\,\phi_f(x)\,f(x)\,dx\ \sum_{j=1}^{k+1} \alpha_j \sum_{h=1}^p \beta_{j,h} \int_{\mathbb{R}^d} \frac{\Upsilon(\rho_0,x)}{V(x)}\, \frac{\partial T}{\partial\gamma_{j,h}}(\rho_0,\gamma_0,x)\, dF_j(x) \in \mathbb{R}^p$$

and

$$\Sigma = \int_{\mathbb{R}} \aleph^2(x)\,f(x)\,dx\ \sum_{j=1}^{k+1} \alpha_j \int_{\mathbb{R}^d} \Upsilon(\rho_0,x)\,\Upsilon(\rho_0,x)^\top\, dF_j(x) \in \mathcal{M}_p(\mathbb{R}).$$
Proof. 
See Appendix A. □

3.2.1. The Parameter $\gamma_0$ Is Known

As explained in [4], in practice, the case where the parameter $\gamma_0$ is known may be encountered when there is no apparent change and one wishes to test for possible weak changes; that is, the situation where $\gamma_0 = 0$. This is what is usually tested in the literature. Recall that

$$\eta(\rho_0,\gamma_0,\beta) = \sum_{j=1}^{k+1} \alpha_j \sum_{1\le h\le m\le p} \beta_{j,h}\,\beta_{j,m}\, \eta_{j,2}^{(h,m)}(\rho_0,\gamma_0).$$

Note that, by our assumptions, the real numbers

$$\eta_{j,2}^{(h,m)}(\rho_0,\gamma_0) = I(f) \int_{\mathbb{R}^d} \frac{1}{V^2(x)} \frac{\partial T}{\partial\gamma_{j,h}}(\rho_0,\gamma_0,x)\, \frac{\partial T}{\partial\gamma_{j,m}}(\rho_0,\gamma_0,x)\, dF_j(x)$$

are finite. Furthermore, since, for any $j = 1,\ldots,k+1$, $\eta_{j,2}^{(h,m)}(\rho_0,\gamma_0)$ depends on $\rho_0$ and on $F_j$, which itself depends on $\rho_0$ (which is unknown) and on $\gamma_0$, we estimate it by $\hat\eta_{j,2}^{(h,m)}(\rho_n,\gamma_0)$, given by

$$\hat\eta_{j,2}^{(h,m)}(\rho_n,\gamma_0) = \frac{I(f)}{n_j(n)} \sum_{t=\tau_{j-1}}^{\tau_j - 1} \frac{1}{V^2(\mathbf{X}_{t-1})} \frac{\partial T}{\partial\gamma_{j,h}}(\rho_n,\gamma_0,\mathbf{X}_{t-1})\, \frac{\partial T}{\partial\gamma_{j,m}}(\rho_n,\gamma_0,\mathbf{X}_{t-1}).$$

Although any consistent estimators of $\eta(\rho_0,\gamma_0,\beta)$ and $\vartheta(\rho_0,\gamma_0,\beta) = \eta^{\frac{1}{2}}(\rho_0,\gamma_0,\beta)$ could be considered, we take them here to be, respectively,

$$\hat\eta_n(\rho_n,\gamma_0,\beta) = \sum_{j=1}^{k+1} \hat\alpha_j \sum_{1\le h\le m\le p} \beta_{j,h}\,\beta_{j,m}\, \hat\eta_{j,2}^{(h,m)}(\rho_n,\gamma_0)$$

and

$$\hat\vartheta_n(\rho_n,\gamma_0,\beta) = \hat\eta_n^{\frac{1}{2}}(\rho_n,\gamma_0,\beta),$$

where, for all $j = 1,\ldots,k+1$, $\hat\alpha_j$ is an estimator of $\alpha_j$, which can be taken to be $\hat\alpha_j = n_j(n)/n$.
Proposition 2.
Under the assumptions (A1)–(A10) and (B1)–(B5), for any sequence of positive integers $s(n)$ such that $n/s(n) \to 0$ as $n \to +\infty$, we have:

[i] $\Pi_n(\rho_0,\gamma_0,\beta) = \Pi_n(\rho_{s(n)},\gamma_0,\beta) + o_P(1)$,

[ii] $\hat\vartheta_n(\rho_n,\gamma_0,\beta) \xrightarrow{P} \vartheta(\rho_0,\gamma_0,\beta)$.
Proof. 
See Appendix A. □
In order to test $H_0$ against $H_\beta^{(n)}$, for any $\beta \in \mathbb{R}^{p(k+1)}$, we consider the following statistic:

$$T_n(\rho_{s(n)},\gamma_0,\beta) = \frac{\Pi_n(\rho_{s(n)},\gamma_0,\beta)}{\hat\vartheta_n(\rho_{s(n)},\gamma_0,\beta)}.$$

Theorem 3 (Optimality).
Assume that (A1)–(A10) and (B1)–(B5) hold. Then, for any given $\beta \in \mathbb{R}^{p(k+1)}$ and for any sequence $s(n)$ of positive integers such that $n/s(n) \to 0$ as $n \to +\infty$, we have the following:

[i] Under $H_0$, as $n \to +\infty$, $T_n(\rho_{s(n)},\gamma_0,\beta) \xrightarrow{\mathcal{D}} \mathcal{N}(0,1)$.

[ii] Under $H_\beta^{(n)}$, at the level of significance $\alpha \in (0,1)$, the asymptotic power of the test based on the statistic $T_n(\rho_{s(n)},\gamma_0,\beta)$ is $P_{k,\tau_k} = 1 - \Phi(z_\alpha - \vartheta(\rho_0,\gamma_0,\beta))$, where $z_\alpha$ is the $(1-\alpha)$-quantile of the standard Gaussian distribution with cumulative distribution function $\Phi$.

[iii] The test based on the statistic $T_n(\rho_{s(n)},\gamma_0,\beta)$ is locally asymptotically optimal.
Proof. 
See Appendix A. □

3.2.2. The Parameter $\gamma_0$ Is Unknown

In practice, $\gamma_0$ is generally unknown and has to be estimated, as well as $\gamma_n$, where, for any $\beta \in \mathbb{R}^{p(k+1)}$, $\gamma_n = \gamma_0 + \beta/\sqrt{n}$. Many methods can be used to obtain consistent estimators of these parameters. Let $\hat\gamma_{0,n}$ be the maximum likelihood estimator of $\gamma_0$ under $H_0$ and let $\hat\gamma_n$ be the maximum likelihood estimator of $\gamma_n$ under $H_\beta^{(n)}$. Then, we easily have that, in probability, asymptotically,

$$\hat\gamma_n = \hat\gamma_{0,n} + \beta/\sqrt{n}.$$
The above equality allows for the study of the test statistics along the same lines as in the case where $\gamma_0$ is known.
We need the following assumptions:
(B′1) For any $m = 1,\ldots,k+1$ and $h = 1,\ldots,p$, writing $T_{\gamma_{m,h}}(\rho,\gamma,x) = \frac{\partial T}{\partial\gamma_{m,h}}(\rho,\gamma,x)$,

$$\max\left( \sup_\gamma \big|T_{\gamma_{m,h}}(\rho,\gamma,x)\big|,\ \sup_\gamma \big\|\nabla_\gamma T_{\gamma_{m,h}}(\rho,\gamma,x)\big\|_{p(k+1)},\ \sup_\gamma \big|\big|\big|\nabla^2_\gamma T_{\gamma_{m,h}}(\rho,\gamma,x)\big|\big|\big|_{p(k+1)} \right) < \kappa(x),$$

for some positive function $\kappa$ defined on $\mathbb{R}^d$.

(B′2) For $j = 1,\ldots,k+1$, $u = 1,2,3$ and $a,b \in \{0,1,2,3\}$,

$$\delta_{a,b}^{(u)} = \int_{\mathbb{R}^d} \frac{\nu^a(x)\,\kappa^b(x)}{V^u(x)}\, dF_j(x) < \infty.$$

(B′3) $\sqrt{n}\,(\hat\gamma_{0,n} - \gamma_0) = O_P(1)$.

Let $s(n)$ be any sequence of positive integers satisfying $n/s(n) \to 0$ as $n \to +\infty$. For testing $H_0$ against $H_\beta^{(n)}$, $\beta \in \mathbb{R}^{p(k+1)}$, we use the test based on the statistic

$$T_n(\rho_{s(n)},\hat\gamma_{0,s(n)},\beta) = \frac{\Pi_n(\rho_{s(n)},\hat\gamma_{0,s(n)},\beta)}{\hat\vartheta_n(\rho_{s(n)},\hat\gamma_{0,s(n)},\beta)}.$$
Proposition 3.
Assume that (A1)–(A10), (B1)–(B5) and (B′1)–(B′3) hold. Then, for any sequence $s(n)$ of positive integers satisfying $n/s(n) \to 0$ as $n \to +\infty$, for any sequence of consistent and asymptotically normal estimators $\{\hat\gamma_{0,n}\}_{n\ge 1}$ of $\gamma_0$, and for any $\beta \in \mathbb{R}^{p(k+1)}$, we have, under $H_0$ and as $n \to +\infty$,

$$\Pi_n(\rho_0,\gamma_0,\beta) = \Pi_n(\rho_{s(n)},\hat\gamma_{0,s(n)},\beta) + o_P(1).$$
Proof. 
See Appendix A. □
Theorem 4 (Optimality).
Assume that (A1)–(A10), (B1)–(B5) and (B′1)–(B′3) hold. Then, for any given $\beta \in \mathbb{R}^{p(k+1)}$, we have:

[i] Under $H_0$, as $n \to +\infty$, $T_n(\rho_{s(n)},\hat\gamma_{0,s(n)},\beta) \xrightarrow{\mathcal{D}} \mathcal{N}(0,1)$.

[ii] Under $H_\beta^{(n)}$, at the level of significance $\alpha \in (0,1)$, the asymptotic power of the test based on the statistic $T_n(\rho_{s(n)},\hat\gamma_{0,s(n)},\beta)$ is $P_{k,\tau_k} = 1 - \Phi(z_\alpha - \vartheta(\rho_0,\gamma_0,\beta))$, where $z_\alpha$ is the $(1-\alpha)$-quantile of the standard Gaussian distribution with cumulative distribution function $\Phi$.

[iii] The test based on the statistic $T_n(\rho_{s(n)},\hat\gamma_{0,s(n)},\beta)$ is locally asymptotically optimal.
Proof. 
See Appendix A. □

4. Application to Detection of Change Points and Their Location Estimation

The time series at hand has jumps if the parameters of its distribution change at certain times. The current test, applied to the model (1) adjusted to this time series, for testing the null hypothesis of no change against at least one change, is conducted with a $\gamma_0$ whose components are all equal to, say, $\rho_1$, the parameter of the stationary distribution on the first segment $[\tau_0,\tau_1)$: $\gamma_0 = (\rho_1^\top, \rho_1^\top, \ldots, \rho_1^\top)^\top$.

The test constructed in this work can do more than test no change against at least one change. To understand this, assume changes have been detected in the data by a given method and their locations have been estimated. Our test can then serve as a screening method for finding possible changes missed by this method. In this situation, one can treat the changes already detected, as well as their locations, as known. With this, the components of $\gamma_0$ will no longer all be the same, and some of the $\tau_j$'s in the model would be considered known. Thus, our test can be used for testing the null hypothesis of $\iota$ changes against at least $\iota + 1$ changes, for some given $\iota \in \mathbb{N}$.

For any $k \ge 1$, denote by $\hat P_{k,\tau_k}$ any estimator of the local power $P_{k,\tau_k}$ of this test at $\tau_k = (\tau_1,\ldots,\tau_k)$, with the convention that $P_{0,\tau_0} = \alpha$, $\alpha \in (0,1)$ being the level of significance. Let $\zeta \in (0, 0.1)$ and let $X_1, X_2, \ldots, X_m$ ($m \ll n$) be the first $m$ stationary observations.
Our procedure for detecting changes in the time series X 1 , X 2 , , X n and estimating their locations is described in the following algorithm.
  • Location 1:
    • Put $j = 1$ and conduct the following:
      • (A1): Take any $t$ between 1 and $m + j$, so that there is a large number of indices before and after $t$ (for example, $t = [(m+j)/2]$).
      • Adjust Model (1) to $X_1, \ldots, X_{m+j}$ with a potential change located at the time index $t$, and apply the testing procedure studied.
    • If $|\hat P_{1,t} - P_{0,\tau_0}| > \zeta$,
      • Put $\tau_1 = m + j$ and go to Location 2 (first change location estimated).
    • Else
      • Carry out $j = j + 1$ and go to (A1).
  • Location 2:
    • Consider the next $h$ observations after $X_{\tau_1}$: $X_{\tau_1+1}, \ldots, X_{\tau_1+h}$.
    • Put $j = 1$ and conduct the following:
      • (A2): Take any $t$ between $\tau_1$ and $\tau_1 + h + j$, so that there is a large number of indices before and after $t$.
      • Adjust Model (1) to $X_{\tau_1}, \ldots, X_{\tau_1+h+j}$ with a potential change located at the time index $t$, and apply the testing procedure studied.
    • If $|\hat P_{1,t} - P_{0,\tau_0}| > \zeta$,
      • Put $\tau_2 = \tau_1 + h + j$ and go to Location 3 (second change location estimated).
    • Else
      • Carry out $j = j + 1$ and go to (A2).
  • Location $i$:
    • Let $\tau_{i-1}$ be the change location at step $i-1$.
    • Put $j = 1$ and perform the following:
      • (Ai): Take any $t$ between $\tau_{i-1}$ and $\tau_{i-1} + h + j$, so that there is a large number of indices before and after $t$.
      • Adjust Model (1) to $X_{\tau_{i-1}}, \ldots, X_{\tau_{i-1}+h+j}$ with a potential change located at the time index $t$, and apply the testing procedure studied.
    • If $|\hat P_{1,t} - P_{0,\tau_0}| > \zeta$,
      • Put $\tau_i = \tau_{i-1} + h + j$, $i = i + 1$ and go to Location $i$ ($i$-th change location estimated).
    • Else
      • Carry out $j = j + 1$ and go to (Ai).
Note that simple $\hat P_{k,\tau_k}$ can be obtained by plugging estimators of the parameters into the expressions of the local power given in Theorems 2–4.
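For concreteness, the loop above can be transcribed in R as follows. This is only a sketch under stated assumptions: p_hat(x, t) stands for a user-supplied plug-in estimator of the local power $\hat P_{1,t}$ for a single potential change at index $t$ (it is not defined here), and all names are ours, not the authors'.

## Sketch (names are ours, not the authors') of the detection loop of Section 4.
## p_hat(x, t) must return the plug-in local-power estimate P_hat_{1,t} for a
## potential change at index t of the series x; alpha plays the role of P_{0,tau_0}.
detect_changes <- function(x, m, h, alpha, zeta, p_hat) {
  taus  <- integer(0)   # estimated change locations
  start <- 1            # left end of the current segment
  len   <- m            # initial window length (m at Location 1, h afterwards)
  j     <- 1
  while (start + len + j <= length(x)) {
    seg <- x[start:(start + len + j)]      # current window of observations
    t   <- floor(length(seg) / 2)          # candidate instant inside the window
    if (abs(p_hat(seg, t) - alpha) > zeta) {
      taus  <- c(taus, start + len + j)    # change location estimated
      start <- start + len + j             # restart just after the change
      len   <- h
      j     <- 1
    } else {
      j <- j + 1                           # no detection: extend the window
    }
  }
  taus
}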

5. Simulation Experiment

In this section, the theoretical results are applied to simulated data, using the software R 4.2.0. We first study the power of the test as a function of the magnitude of the breaks when these are given. Next, we use the power for estimating the location of the breaks when they are no longer assumed to be fixed. The results we present in the sequel are obtained for the nominal levels $\alpha = 1\%, 5\%, 10\%$. Almost all the estimators in this section are computed from 5000 replications. We use the following particular CHARN model:

$$X_t = \rho_{0,1} + \gamma_{0_{j,1}} + \frac{\beta_{j,1}}{\sqrt{n}} + \left(\rho_{0,2} + \gamma_{0_{j,2}} + \frac{\beta_{j,2}}{\sqrt{n}}\right) X_{t-1}\, e^{-\left(\rho_{0,3} + \gamma_{0_{j,3}} + \frac{\beta_{j,3}}{\sqrt{n}}\right) X_{t-1}^2} + \big(\theta_1 + \theta_2 X_{t-1}^2\big)^{\frac{1}{2}}\, \varepsilon_t, \quad j = 1,\ldots,k+1, \; t \in \mathbb{Z}, \tag{13}$$

where $n$ denotes the number of observations and $(\varepsilon_t)_t$ is a standard white noise with a differentiable density $f$. Here, on $[\tau_{j-1},\tau_j)$, $\rho_0 = (\rho_{0,1}, \rho_{0,2}, \rho_{0,3})^\top \in \mathbb{R}^3$, $\gamma_{0_j} = (\gamma_{0_{j,1}}, \gamma_{0_{j,2}}, \gamma_{0_{j,3}})^\top$ and $\beta_j = (\beta_{j,1}, \beta_{j,2}, \beta_{j,3})^\top \in \mathbb{R}^3$; $\rho_0$, $\theta_1$, $\theta_2$ and $\gamma_0$ are parameters to be specified in each particular model considered.
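As an illustration, a minimal R simulator of (13) may be sketched as follows (our own sketch, assuming $\gamma_0 = 0$ as in the experiments below, Gaussian noise $\varepsilon_t$, and the initialization $X_0 = 0$; the function name is ours):

## Sketch: simulate n observations from model (13) with change instants tau,
## gamma_0 = 0, Gaussian noise and X_0 = 0. Row j of beta holds
## (beta_{j,1}, beta_{j,2}, beta_{j,3}) for the j-th segment; beta[1, ] = 0.
simulate_charn <- function(n, tau, rho0, beta, theta1, theta2) {
  brk <- c(1, tau, n + 1)                 # tau_0 = 1, tau_{k+1} = n + 1
  x   <- numeric(n + 1)                   # x[1] is X_0 = 0
  for (t in 1:n) {
    j <- findInterval(t, brk)             # index of the segment containing t
    a <- rho0 + beta[j, ] / sqrt(n)       # local parameter on segment j
    m <- a[1] + a[2] * x[t] * exp(-a[3] * x[t]^2)  # conditional mean T
    v <- sqrt(theta1 + theta2 * x[t]^2)   # conditional standard deviation V
    x[t + 1] <- m + v * rnorm(1)
  }
  x[-1]                                   # X_1, ..., X_n
}

## Example: three breaks at (90, 190, 275) as in Section 5.4.1 (AR(1) case).
## x <- simulate_charn(400, tau = c(90, 190, 275), rho0 = c(0.2, 0.3, 0),
##                     beta = rbind(c(0, 0, 0), c(3, 2, 0), c(1, 3, 0), c(1, 1, 0)),
##                     theta1 = 1, theta2 = 0)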

5.1. Simulation Methodology

Our methodology for conducting the simulation experiment works as follows. For some given and fixed values of $\rho_0 = (\rho_{0,1}, \rho_{0,2}, \rho_{0,3})^\top$ and of the number of changes $k$, for $1 \le j \le k+1$, we consider different values of the triplet $\beta_j = (\beta_{j,1}, \beta_{j,2}, \beta_{j,3})^\top$ corresponding to the shift in the parameter on each interval $[\tau_{j-1},\tau_j)$, with $\beta_1 = (0,0,0)^\top$ (indicating no change in the first interval). This provides us with $n_j(n)$, $j = 1,\ldots,k+1$, observations in the $j$-th interval. Subsequently, we utilize the model (13) to simulate these observations, to which our algorithm (see Section 4) is then applied.

5.2. Power Study for Given Break Locations

5.2.1. $\gamma_0$ and $f$ Are Known

We now treat a particular case of (13). We consider $n = 100$, $n_1(n) = 40$, $n_2(n) = 60$, $\gamma_0 = 0$; $f$ is the standard Gaussian density, $\theta_1 = 1$, $\theta_2 = 0.02$, $\rho_{0,1} = 0.8$, $\rho_{0,2} = 0.2$, $\rho_{0,3} = \beta_{j,3} = 0$, $T(\rho_0 + \gamma_j\omega_j, x) = \rho_{0,1} + \gamma_{j,1}\omega_j + (\rho_{0,2} + \gamma_{j,2}\omega_j)x$, and, for $i = 1,2$, $\gamma_{j,i} = \gamma_{0_{j,i}} + \beta_{j,i}/\sqrt{n}$ with $\beta_{j,i} \in [-10,10]$, $V(x) = (1 + 0.02x^2)^{\frac{1}{2}}$, $\partial T/\partial\gamma_{j,1}(\rho_0,\gamma_0,x) = 1$ and $\partial T/\partial\gamma_{j,2}(\rho_0,\gamma_0,x) = x$. For $j = 1,2$, to study the behavior of the power as a function of the magnitudes of the changes, we fix one component of each $\beta_j$ and compute the power of the test as a function of the other component. The results are plotted in Figure 1, where one can observe that the power grows quickly to one as the norm of the magnitude grows.

5.2.2. $\gamma_0$ and $f$ Are Unknown

As we said before, in practice, $\gamma_0$ and $f$ are unknown and must be estimated. While we estimate $\gamma_0$ by the least-squares method, we estimate $f$ by the Parzen–Rosenblatt estimator (see [30]), defined by

$$\hat f_n(x) = \frac{1}{n h_n} \sum_{t=1}^n K\left(\frac{x - \hat\varepsilon_t}{h_n}\right), \quad x \in \mathbb{R},$$

where

$$\hat\varepsilon_t = \frac{X_t - T(\hat\rho_n + \hat\gamma_n\,\omega(t),\, \mathbf{X}_{t-1})}{V(\mathbf{X}_{t-1})},$$

$\hat\rho_n$ and $\hat\gamma_n$ denote the least-squares estimators of $\rho_0$ and $\gamma_0$, respectively, $h_n$ is the smoothing parameter, and $K$ is a symmetric kernel function with the following properties:
1. $K(x) > 0$ for any $x \in \mathbb{R}$ (positivity).
2. $\int_{\mathbb{R}} K(x)\,dx = 1$ (density).
3. $\int_{\mathbb{R}} x K(x)\,dx = 0$ (by symmetry).
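In R, this estimator can be sketched directly (a sketch under the choices made below: Gaussian kernel $K$ and bandwidth $h_n = \sigma_n n^{-1/5}$; the function name is ours):

## Sketch: Parzen-Rosenblatt estimate of f built from the residuals eps_hat,
## with Gaussian kernel K = dnorm and bandwidth h_n = sigma_n * n^(-1/5).
f_hat <- function(eps_hat) {
  n  <- length(eps_hat)
  hn <- sd(eps_hat) * n^(-1 / 5)
  function(x) sapply(x, function(u) mean(dnorm((u - eps_hat) / hn)) / hn)
}

Equivalently, R's built-in density(eps_hat, bw = hn, kernel = "gaussian") evaluates the same estimate on a grid of points.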
Now, we calculate the power of the test using these estimators. We choose a Gaussian kernel $K$ and $h_n \simeq \sigma_n n^{-1/5}$, with $\sigma_n$ the sample standard deviation. We consider a sample of $n = 100$ observations, with $n_1(n) = 40$ and $n_2(n) = 60$, and we generate the observations from (13). The results for $\rho_{0,1} = 0.8$, $\rho_{0,2} = 0.2$, $\rho_{0,3} = 0$, $\theta_1 = 1$ and $\theta_2 = 0.02$ at the level of significance $5\%$ are given in Figure 2. It is clear that the local power of the test behaves approximately the same for the standard Gaussian and the standard Student densities. The results do not change significantly for the Epanechnikov, uniform, and quadratic kernels.

5.3. Detection of Change Points and Estimation of Their Locations

In this subsection, we detect change points and estimate their locations in simulated data. Ref. [4] studied the case of changes only in $\rho_{0,1}$. We start by evaluating the power of the test in the case of no break in the data. Next, we study changes in $\rho_{0,2}$. Finally, we study changes in $\rho_{0,1}$ and $\rho_{0,2}$ simultaneously.

5.3.1. No Break

Following the algorithm in Section 4, we start by calculating the asymptotic local power given by (7) in the case where there is no break, that is, for $k = 0$. We consider a sample of $n = 200$ observations generated from Model (13), for $\gamma_0 = 0$ and $f$ a standard Gaussian density.

For $\rho_{0,1} = 0.5$, $\rho_{0,2} = \rho_{0,3} = \theta_2 = 0$ and $\theta_1 = 1$, the asymptotic local power of our test at different levels of significance is plotted in Figure 3. We can see there that the local power does not exceed 0.1012 when $\alpha = 10\%$ and does not exceed 0.0507 when $\alpha = 5\%$. Then, for $\alpha = 5\%$ or $10\%$, with the thresholds corresponding to $\zeta = 0.002$ and $0.0008$, respectively, we keep the null hypothesis and conclude that there is no break in the data.

5.3.2. Case of One Single Break

Here, we consider the problem of detecting one single break when it happens jointly in $\rho_{0,1}$ and $\rho_{0,2}$. For $\alpha = 5\%$, $n = 200$, $\rho_{0,1} = 0.5$, $\rho_{0,2} = 0.2$ and $\rho_{0,3} = \beta_{1,3} = 0$, and for different values of $\tau_1$ and $\beta_1 = (\beta_{1,1}, \beta_{1,2})$, the estimation of the break location, as well as the root mean square error (RMSE), is presented in Table 1. One can see from this table that the estimation is accurate and that the RMSE is larger for smaller $\|\beta_1\|$.

5.4. Case of Three Breaks ($k = 3$)

Now, we study the case of three breaks when piece-wise AR(1) and AR(1)-ARCH(1) models are adjusted to the data. Note that these models are sub-classes of CHARN(1,1) models.

5.4.1. AR(1) Models

We start with AR(1) models. For $\tau = (\tau_1,\tau_2,\tau_3)$, the data are obtained from (13); for $j = 1,2,3$, $\rho_{0,3} = \beta_{j,3} = 0$, $\theta_2 = 0$, $\theta_1 = 1$, and $(\varepsilon_t)$ is a sequence of standard Gaussian white noise. The number of change points is assumed to be unknown, and we aim to detect them and estimate their locations using our theoretical results and following our algorithm. For 5000 replications, $n = 400$, $\rho_0 = (\rho_{0,1},\rho_{0,2}) = (0.2, 0.3)$, $\tau = (\tau_1,\tau_2,\tau_3) = (90, 190, 275)$, different values of the magnitude of change $\beta_j = (\beta_{j,1},\beta_{j,2})$, $j = 1,2,3$, and the same threshold $\zeta = 0.1\%$, the estimates obtained are displayed in Table 2, together with their associated RMSE (in brackets). These results seem to show that our method tends to estimate the correct number of changes but overestimates their locations, with a relatively large RMSE when the jumps in the parameters of the AR(1) models are too small. Again for an AR(1) model, for $j = 1,2,3$, we fix $\beta = ((3,2), (1,3), (1,1))$ and the instants of breaks $\tau = (90, 190, 275)$, and we monitor the corresponding break estimates with respect to the variation in the threshold corresponding to different $\zeta$. For $\zeta = 0.07\%$, our method overestimates the number of changes (six instants detected). For $\zeta = 0.1\%$ and $0.15\%$, it estimates the correct number of changes but overestimates their locations. For $\zeta = 0.25\%$, it underestimates the number of changes and overestimates their locations. The overestimation of the break locations may be explained by the weakness of the magnitude of the changes. If we consider the same study for $\beta = ((5,3), (1,2), (4,6))$, we obtain the same results for the number of changes, but with more accurate change location estimates.

5.4.2. AR(1)-ARCH(1) Models

Here, we consider Model (13) for $\rho_{0,1} = 0.2$, $\rho_{0,2} = 0.3$, $\beta_1 = (3,2)$, $\beta_2 = (1,1)$, $\beta_3 = (2,4)$, $\theta_1 = 1$, $\theta_2 = 0.02$ and $\rho_{0,3} = \beta_{j,3} = 0$, $j = 1,2,3$, which leads to an AR(1)-ARCH(1) model. For $n = 350$, Table 3 shows the estimated change locations corresponding to the same $\zeta$ and different magnitudes of change. We can see that our method estimates the correct number of changes but overestimates their locations, with a relatively large RMSE.

5.4.3. Conclusions

Based on the previous simulation results, we can conclude that our method is sensitive to the choice of $\zeta$, and that it is efficient in detecting weak changes and estimating their locations in the AR(1) and AR(1)-ARCH(1) models we have considered, when the magnitudes of the changes are not too small.

5.5. Comparison with [27]

In a class of shifted models, Ref. [28] performed a comparison between her method, which treats a particular case of ours, and other methods, including the one of [27]. She concluded that her method is more efficient for estimating weak break locations.

In this section, we compare our method to that of [27], denoted by SCUSUM, for a class of more general models. Recalling Model (13), we consider several cases of one single break corresponding to different instants $\tau_1$, and we take $\rho_{0,3} = \beta_{1,3} = \theta_2 = 0$ and $\theta_1 = 1$. For $n = 200$, $\alpha = 5\%$, $\rho_{0,1} = 0.5$, $\rho_{0,2} = 0.2$ and different values of $\beta_1 = (\beta_{1,1}, \beta_{1,2})$, we perform 1000 replications, and at each replication, the change location is estimated by SCUSUM and by our method. Table 4 shows the results obtained.

For most of the 1000 replications, SCUSUM was not able to detect any change. For that reason, we kept only the cases where it detected a change, and we calculated the mean of the estimated change locations. The results are displayed in Table 4, from which it is clear that our method is more accurate than SCUSUM for the detection of weak changes in the parameters of the AR(1) model studied.

5.6. Application to Real Data

Here, we apply our methodology to detecting changes in the log S&P 500 stock price data obtained from the website https://finance.yahoo.com/quote/, accessed on 1 May 2024. These daily data cover the period from January 1992 to December 2000 and represent one of the most closely followed stock market indices worldwide, serving as a significant indicator of the U.S. economy. The raw data exhibit a trend, which shows that the S&P 500 index is non-stationary (see Figure 4). Because of this, our methodology cannot be directly applied to this series.

Let $P_t$ denote the S&P 500 stock price index on day $t$, and define $X_t$ as

$$X_t = \log\left(\frac{P_t}{P_{t-1}}\right).$$
The function $\log$ being monotonic implies that the change-point locations in $(P_t)$, $(\log(P_t))$, and $(X_t)$ are identical. Graphically (refer to Figure 5), $X_t$ appears to be approximately piece-wise stationary over a finite number of segments, which aligns with the requirements for applying our methodology to study changes in the raw data.

To accommodate these characteristics, we adjust the CHARN model $X_t = \beta_j/\sqrt{n} + \theta_j\varepsilon_t$ within each segment $[\tau_j, \tau_{j+1})$, where $\varepsilon_t \sim \mathcal{N}(0,1)$. The Gaussian assumption is validated by applying the Shapiro–Wilk test.
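In R, this preprocessing and check can be sketched in two lines (here p is a hypothetical vector of daily closing prices, and i0, i1 the index bounds of one segment):

## Sketch: log-returns of the closing prices and a per-segment Gaussianity
## check; a large Shapiro-Wilk p-value supports the normal-noise assumption.
x <- diff(log(p))           # X_t = log(P_t / P_{t-1})
shapiro.test(x[i0:i1])      # Shapiro-Wilk test on the segment [i0, i1]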
Then, applying our procedure to this model, we obtained the following break location dates: 1992-11-11, 1994-03-03, 1995-02-14, 1996-07-11, 1997-07-14, 1998-02-09, 1998-06-22, 1998-11-02, 1999-03-17, and 1999-10-13. The changes occurring in 1992 can be linked to the damage caused by Hurricane Andrew or to the European Monetary System crisis. The one in 1994 can be associated with the U.S. lifting of the trade embargo on Vietnam. Those in 1995 can be attributed to the bankruptcy of Barings Bank. That in 1997 may be associated with the Asian crisis. Those in 1998 may be connected to the rescue organized by the New York Federal Reserve Bank. Finally, those of 1999 can be associated with the repeal of the 1933 Glass–Steagall Act by the so-called Gramm–Leach–Bliley Act.

6. Conclusions

We generalized the work of [4] to a class of more general CHARN models. We studied weak breaks in the parameters of the function $T$ when the function $V$ and the parameters $\rho_0$ and $\gamma_0$ are known. We established LAN and contiguity results. We gave an explicit expression of the local power of the test.
Next, we studied the case where $\rho_0$ is unknown and $\gamma_0$ is known or unknown. We estimated these parameters and proved the convergence of the central sequence based on the estimated parameters to the one based on the true parameters. In this case, we proved that the test remains optimal if we replace the parameters with their estimators. From these results, we used the theoretical power for detecting weak breaks and estimating their locations in time series through an algorithm that we constructed.
The simulation experiment conducted shows that our method can detect weak breaks in the parameters of the linear AR(1) and the non-linear AR(1)-ARCH(1) models considered. Also, the location of the breaks as well as their number can be accurately estimated when the magnitudes are not too small.
Compared to [27], our method seems more efficient for estimating weak break locations. Sometimes, the method in [27] detects breaks in data simulated with no break; this did not happen with our method when we chose a suitable $\zeta$. Our method was also applied to a set of financial data.

Author Contributions

Methodology, Y.S. and J.N.-W.; Software, Y.S.; Validation, J.N.-W. and Z.K.; Investigation, Y.S.; Writing—original draft, Y.S., J.N.-W. and Z.K.; Writing—review & editing, J.N.-W.; Visualization, Z.K.; Supervision, J.N.-W. and Z.K.; Project administration, J.N.-W. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

The original contributions presented in the study are included in the article; further inquiries can be directed to the corresponding author.

Conflicts of Interest

The authors have no conflicts of interest to declare.

Appendix A. Proofs

This section provides the proofs of the results stated in the preceding sections.

Appendix A.1. Proof of Theorem 1

For any $\beta \in \mathbb{R}^{p(k+1)}$, the log-likelihood ratio of $H_0$ against $H_\beta^{(n)}$ is given by

$$\Theta_n(\rho_0,\gamma_0,\beta) = \sum_{t=1}^n \Big\{ \log f\big[\varepsilon_t(\rho_0,\gamma_n)\big] - \log f\big[\varepsilon_t(\rho_0,\gamma_0)\big] \Big\}. \tag{A1}$$
First, we show that, as $n \to +\infty$, $\Theta_n(\rho_0,\gamma_0,\beta)$ decomposes into

$$\Theta_n(\rho_0,\gamma_0,\beta) = \Pi_n - \Delta_n + o_P(1),$$

where

$$\Delta_n = \frac{1}{2n} \sum_{t=1}^n \left\{ \frac{1}{V^2(\mathbf{X}_{t-1})}\, \beta^\top M(\gamma_0,\mathbf{X}_{t-1})\, \beta\, \phi_f'[\varepsilon_t(\rho_0,\gamma_0)] - \frac{1}{V(\mathbf{X}_{t-1})}\, \beta^\top H(\gamma_0,\mathbf{X}_{t-1})\, \beta\, \phi_f[\varepsilon_t(\rho_0,\gamma_0)] \right\}, \tag{A2}$$

$$\Pi_n = \frac{1}{\sqrt{n}} \sum_{t=1}^n \frac{1}{V(\mathbf{X}_{t-1})}\, \beta^\top N(\gamma_0,\mathbf{X}_{t-1})\, \phi_f[\varepsilon_t(\rho_0,\gamma_0)], \tag{A3}$$

and

  • $N(\gamma,\mathbf{X}_{t-1}) = \big(\omega_1(t)\nabla_{\gamma_1}[T(\rho_0,\gamma,\mathbf{X}_{t-1})]^\top, \ldots, \omega_{k+1}(t)\nabla_{\gamma_{k+1}}[T(\rho_0,\gamma,\mathbf{X}_{t-1})]^\top\big)^\top \in \mathbb{R}^{p(k+1)}$, with $\omega_i \in \{0,1\}$ for $i = 1,\ldots,k+1$;

  • $M(\gamma,\mathbf{X}_{t-1}) = \mathrm{diag}\big(M_1(\gamma,\mathbf{X}_{t-1}), M_2(\gamma,\mathbf{X}_{t-1}), \ldots, M_{k+1}(\gamma,\mathbf{X}_{t-1})\big) \in \mathcal{M}_{p(k+1)}(\mathbb{R})$, a block-diagonal matrix whose off-diagonal blocks are the null matrix $0 \in \mathcal{M}_p(\mathbb{R})$, where, for any $i = 1,\ldots,k+1$,

$$M_i(\gamma,\mathbf{X}_{t-1}) = \omega_i^2(t) \left( \frac{\partial T}{\partial\gamma_{i,a}}(\rho_0,\gamma,\mathbf{X}_{t-1})\, \frac{\partial T}{\partial\gamma_{i,b}}(\rho_0,\gamma,\mathbf{X}_{t-1}) \right)_{1\le a,b\le p} \in \mathcal{M}_p(\mathbb{R});$$

  • $H(\gamma,\mathbf{X}_{t-1}) = \mathrm{diag}\big(H_1(\gamma,\mathbf{X}_{t-1}), \ldots, H_{k+1}(\gamma,\mathbf{X}_{t-1})\big) \in \mathcal{M}_{p(k+1)}(\mathbb{R})$, where, for $i = 1,\ldots,k+1$,

$$H_i(\gamma,\mathbf{X}_{t-1}) = \omega_i^2(t) \left( \frac{\partial^2 T}{\partial\gamma_{i,a}\partial\gamma_{i,b}}(\rho_0,\gamma,\mathbf{X}_{t-1}) \right)_{1\le a,b\le p} \in \mathcal{M}_p(\mathbb{R}).$$
Applying a Taylor expansion with Lagrange remainder to $\log f[\varepsilon_t(\rho_0,\gamma)]$ in a neighborhood of $\gamma_0$, we obtain, for some $\tilde\gamma$ lying between $\gamma_0$ and $\gamma_n$,

$$\log f[\varepsilon_t(\rho_0,\gamma_n)] - \log f[\varepsilon_t(\rho_0,\gamma_0)] = (\gamma_n - \gamma_0)^\top D_\gamma\big[\log f\{\varepsilon_t(\gamma)\}\big]\Big|_{\gamma=\gamma_0} + \frac{1}{2}(\gamma_n - \gamma_0)^\top H_\gamma\big[\log f\{\varepsilon_t(\gamma)\}\big]\Big|_{\gamma=\tilde\gamma}\, (\gamma_n - \gamma_0).$$

To simplify the study, we calculate all the expressions we need:

  • $D_\gamma \varepsilon_t(\rho_0,\gamma) = -\dfrac{1}{V(\mathbf{X}_{t-1})}\, N(\gamma,\mathbf{X}_{t-1})$;

  • $H_\gamma \varepsilon_t(\rho_0,\gamma) = -\dfrac{1}{V(\mathbf{X}_{t-1})}\, H_\gamma T(\rho_0,\gamma,\mathbf{X}_{t-1})$;

  • $D_\gamma \log f\big(\varepsilon_t(\rho_0,\gamma)\big) = \dfrac{1}{V(\mathbf{X}_{t-1})}\, N(\gamma,\mathbf{X}_{t-1})\, \phi_f[\varepsilon_t(\rho_0,\gamma)] = -D_\gamma\varepsilon_t(\rho_0,\gamma)\, \phi_f[\varepsilon_t(\rho_0,\gamma)]$;

  • $H_\gamma \log f\big(\varepsilon_t(\rho_0,\gamma)\big) = -\dfrac{1}{V^2(\mathbf{X}_{t-1})}\, M(\gamma,\mathbf{X}_{t-1})\, \phi_f'[\varepsilon_t(\rho_0,\gamma)] + \dfrac{1}{V(\mathbf{X}_{t-1})}\, H_\gamma T(\rho_0,\gamma,\mathbf{X}_{t-1})\, \phi_f[\varepsilon_t(\rho_0,\gamma)] = -\dfrac{1}{V^2(\mathbf{X}_{t-1})}\, M(\gamma,\mathbf{X}_{t-1})\, \phi_f'[\varepsilon_t(\rho_0,\gamma)] - H_\gamma\varepsilon_t(\rho_0,\gamma)\, \phi_f[\varepsilon_t(\rho_0,\gamma)]$.

Then, since $\gamma_n - \gamma_0 = \beta/\sqrt{n}$,

$$\log f[\varepsilon_t(\rho_0,\gamma_n)] - \log f[\varepsilon_t(\rho_0,\gamma_0)] = \frac{1}{\sqrt{n}}\, \frac{\beta^\top N(\gamma_0,\mathbf{X}_{t-1})}{V(\mathbf{X}_{t-1})}\, \phi_f[\varepsilon_t(\rho_0,\gamma_0)] - \frac{1}{2n}\, \frac{1}{V^2(\mathbf{X}_{t-1})}\, \beta^\top M(\tilde\gamma,\mathbf{X}_{t-1})\,\beta\, \phi_f'[\varepsilon_t(\rho_0,\tilde\gamma)] + \frac{1}{2n}\, \frac{1}{V(\mathbf{X}_{t-1})}\, \beta^\top H_\gamma T(\rho_0,\tilde\gamma,\mathbf{X}_{t-1})\,\beta\, \phi_f[\varepsilon_t(\rho_0,\tilde\gamma)].$$

Now,

$$\Theta_n(\rho_0,\gamma_0,\beta) = \frac{1}{\sqrt{n}} \sum_{t=1}^n \frac{1}{V(\mathbf{X}_{t-1})}\, \beta^\top N(\gamma_0,\mathbf{X}_{t-1})\, \phi_f[\varepsilon_t(\rho_0,\gamma_0)] - \frac{1}{2n} \sum_{t=1}^n \frac{1}{V^2(\mathbf{X}_{t-1})}\, \beta^\top M(\tilde\gamma,\mathbf{X}_{t-1})\,\beta\, \phi_f'[\varepsilon_t(\rho_0,\tilde\gamma)] + \frac{1}{2n} \sum_{t=1}^n \frac{1}{V(\mathbf{X}_{t-1})}\, \beta^\top H(\tilde\gamma,\mathbf{X}_{t-1})\,\beta\, \phi_f[\varepsilon_t(\rho_0,\tilde\gamma)],$$
where, observing that, for any $t$ and any $i \ne j$, $\omega_i(t)\,\omega_j(t) = 0$,

$$M(\tilde\gamma,\mathbf{X}_{t-1}) = \mathrm{diag}\big(M_1(\tilde\gamma,\mathbf{X}_{t-1}), \ldots, M_{k+1}(\tilde\gamma,\mathbf{X}_{t-1})\big) \in \mathcal{M}_{p(k+1)}(\mathbb{R}),$$

with $0 \in \mathcal{M}_p(\mathbb{R})$ standing for the null off-diagonal blocks and

$$M_i(\tilde\gamma,\mathbf{X}_{t-1}) = \omega_i^2(t) \left( \frac{\partial T}{\partial\gamma_{i,a}}(\rho_0,\tilde\gamma,\mathbf{X}_{t-1})\, \frac{\partial T}{\partial\gamma_{i,b}}(\rho_0,\tilde\gamma,\mathbf{X}_{t-1}) \right)_{1\le a,b\le p} \in \mathcal{M}_p(\mathbb{R}),$$

$$H(\tilde\gamma,\mathbf{X}_{t-1}) = \mathrm{diag}\big(H_1(\tilde\gamma,\mathbf{X}_{t-1}), \ldots, H_{k+1}(\tilde\gamma,\mathbf{X}_{t-1})\big) \in \mathcal{M}_{p(k+1)}(\mathbb{R}),$$

with

$$H_i(\tilde\gamma,\mathbf{X}_{t-1}) = \omega_i^2(t) \left( \frac{\partial^2 T}{\partial\gamma_{i,a}\partial\gamma_{i,b}}(\rho_0,\tilde\gamma,\mathbf{X}_{t-1}) \right)_{1\le a,b\le p} \in \mathcal{M}_p(\mathbb{R}).$$
Let

$$\chi_n(\tilde\gamma) = -\frac{1}{2n} \sum_{t=1}^n \frac{1}{V^2(\mathbf{X}_{t-1})}\, \beta^\top M(\tilde\gamma,\mathbf{X}_{t-1})\,\beta\, \phi_f'[\varepsilon_t(\rho_0,\tilde\gamma)] + \frac{1}{2n} \sum_{t=1}^n \frac{1}{V(\mathbf{X}_{t-1})}\, \beta^\top H(\tilde\gamma,\mathbf{X}_{t-1})\,\beta\, \phi_f[\varepsilon_t(\rho_0,\tilde\gamma)].$$

Using (A3), we have $E\{\phi_f[\varepsilon_t(\rho_0,\gamma)]\} = 0$, $E\{\phi_f'[\varepsilon_t(\rho_0,\gamma)]\} = I(f)$ and $E\{|\phi_f'[\varepsilon_t(\rho_0,\gamma)]|\} < \infty$. Now, using (A4) and the ergodic theorem, a simple calculation shows that

$$\chi_n(\tilde\gamma) - \chi_n(\gamma_0) \xrightarrow[n\to+\infty]{P} 0.$$
It results from the above that

$$\Theta_n(\rho_0,\gamma_0,\beta) = \Pi_n - \Delta_n + o_P(1),$$

where $\Delta_n$ and $\Pi_n$ are defined by (A2) and (A3), respectively.

Now, we study the asymptotic behavior of $\Delta_n$ under $H_0$. By the piecewise stationarity and ergodicity, for any $j = 1,\ldots,k+1$, we can write, almost surely,

$$\lim_{n\to+\infty} \Delta_n = \frac{1}{2} \sum_{j=1}^{k+1} \alpha_j \sum_{1\le h\le m\le p} \beta_{j,h}\,\beta_{j,m}\, \eta_{j,2}^{(h,m)}(\rho_0,\gamma_0) = \frac{\eta(\rho_0,\gamma_0,\beta)}{2},$$

with

$$\eta_{j,2}^{(h,m)}(\rho_0,\gamma_0) = I(f) \int_{\mathbb{R}^d} \frac{1}{V^2(x)} \frac{\partial T}{\partial\gamma_{j,h}}(\rho_0,\gamma_0,x)\, \frac{\partial T}{\partial\gamma_{j,m}}(\rho_0,\gamma_0,x)\, dF_j(x).$$

Thus, we can write

$$\Theta_n = \Pi_n - \frac{\eta(\rho_0,\gamma_0,\beta)}{2} + o_P(1).$$
Now, we prove that, under $H_0$,

$$\Pi_n \xrightarrow{\mathcal{D}} \mathcal{N}\big(0, \eta(\rho_0,\gamma_0,\beta)\big).$$

We consider the sequence

$$\Theta_{n,j} = \frac{1}{\sqrt{n}} \sum_{t=1}^j \frac{1}{V(\mathbf{X}_{t-1})}\, \beta^\top N(\gamma_0,\mathbf{X}_{t-1})\, \phi_f[\varepsilon_t(\rho_0,\gamma_0)], \quad j = 1,\ldots,n,$$

and we define, for every $t = 1,\ldots,n$,

$$\psi_{n,t} = \frac{1}{\sqrt{n}}\, \frac{1}{V(\mathbf{X}_{t-1})}\, \beta^\top N(\gamma_0,\mathbf{X}_{t-1})\, \phi_f[\varepsilon_t(\rho_0,\gamma_0)].$$

We use Corollary 3.1 of [31] to study the asymptotic behavior of $\Theta_{n,j}$. It is easy to prove that $\{(\Theta_{n,j}, \mathcal{F}_j),\ j = 1,\ldots,n\}$ is a martingale sequence.

Using the fact that $\varepsilon_t$ is independent of $\mathcal{F}_{t-1}$ for $t = 1,\ldots,n$, and using the ergodic theorem, we can show that, almost surely,

$$\lim_{n\to+\infty} \sum_{t=1}^n E\big(\psi_{n,t}^2 \,\big|\, \mathcal{F}_{t-1}\big) = \sum_{j=1}^{k+1} \alpha_j \sum_{1\le h\le m\le p} \beta_{j,h}\,\beta_{j,m}\, \eta_{j,2}^{(h,m)}(\rho_0,\gamma_0) = \eta(\rho_0,\gamma_0,\beta) < \infty,$$

which shows that the first condition of Corollary 3.1 of [31] is verified. It remains to check the Lindeberg condition. Let $\epsilon > 0$; by the Hölder inequality, the Markov inequality and the ergodic theorem, we can write that, as $n \to \infty$,

$$\sum_{t=1}^n E\big(\psi_{n,t}^2\, \mathbb{1}_{\{|\psi_{n,t}| > \epsilon\}} \,\big|\, \mathcal{F}_{t-1}\big) \xrightarrow{a.s.} 0.$$

Then, the conditions of Corollary 3.1 of [31] are completely verified, so that, under $H_0$, we have

$$\Pi_n(\rho_0,\gamma_0,\beta) \xrightarrow{\mathcal{D}} \mathcal{N}\big(0, \eta(\rho_0,\gamma_0,\beta)\big).$$

Consequently, under $H_0$, we have

$$\Theta_n(\rho_0,\gamma_0,\beta) \xrightarrow{\mathcal{D}} \mathcal{N}\left(-\frac{\eta(\rho_0,\gamma_0,\beta)}{2},\ \eta(\rho_0,\gamma_0,\beta)\right).$$

Collecting the above results, the LAN property is established with the central sequence $\Pi_n(\rho_0,\gamma_0,\beta)$.

Appendix A.2. Proof of Corollary 1

For any $\beta \in \mathbb{R}^{p(k+1)}$, from Theorem 1, under $H_0$, as $n \to +\infty$,

$$\Pi_n(\rho_0,\gamma_0,\beta) \xrightarrow{\mathcal{D}} \mathcal{N}\big(0, \eta(\rho_0,\gamma_0,\beta)\big).$$

It results that, under $H_0$, as $n \to +\infty$,

$$\Theta_n(\rho_0,\gamma_0,\beta) \xrightarrow{\mathcal{D}} \mathcal{N}\left(-\frac{\eta(\rho_0,\gamma_0,\beta)}{2},\ \eta(\rho_0,\gamma_0,\beta)\right).$$

Then, it is easy to see that, under $H_0$, as $n \to +\infty$,

$$\begin{pmatrix} \Pi_n(\rho_0,\gamma_0,\beta) \\ \Theta_n(\rho_0,\gamma_0,\beta) \end{pmatrix} \xrightarrow{\mathcal{D}} \mathcal{N}\left( \begin{pmatrix} 0 \\ -\frac{\eta(\rho_0,\gamma_0,\beta)}{2} \end{pmatrix}, \begin{pmatrix} \eta(\rho_0,\gamma_0,\beta) & \sigma_{1,2} \\ \sigma_{2,1} & \eta(\rho_0,\gamma_0,\beta) \end{pmatrix} \right),$$

where $\sigma_{1,2} = \sigma_{2,1} = \lim_{n\to+\infty} \mathrm{Cov}(\Pi_n, \Theta_n) = \lim_{n\to+\infty} \big[E(\Pi_n\Theta_n) - E(\Pi_n)E(\Theta_n)\big]$.

Since $E(\Pi_n) = 0$ and

$$\lim_{n\to+\infty} E(\Theta_n\Pi_n) = \lim_{n\to+\infty} \big[E(\Pi_n^2) - E(\Pi_n\Delta_n)\big] = \eta(\rho_0,\gamma_0,\beta),$$

under $H_0$, we have

$$\begin{pmatrix} \Pi_n(\rho_0,\gamma_0,\beta) \\ \Theta_n \end{pmatrix} \xrightarrow{\mathcal{D}} \mathcal{N}\left( \begin{pmatrix} 0 \\ -\frac{\eta(\rho_0,\gamma_0,\beta)}{2} \end{pmatrix}, \begin{pmatrix} \eta(\rho_0,\gamma_0,\beta) & \eta(\rho_0,\gamma_0,\beta) \\ \eta(\rho_0,\gamma_0,\beta) & \eta(\rho_0,\gamma_0,\beta) \end{pmatrix} \right).$$

Using [32] or [33], we obtain that the sequences $\{H_\beta^{(n)}: n \ge 1\}$ and $\{H_0^{(n)} = H_0: n \ge 1\}$ are contiguous, and that, under $H_\beta^{(n)}$, as $n \to +\infty$,

$$\Pi_n(\rho_0,\gamma_0,\beta) \xrightarrow{\mathcal{D}} \mathcal{N}\big(\eta(\rho_0,\gamma_0,\beta), \eta(\rho_0,\gamma_0,\beta)\big).$$

Appendix A.3. Proof of Theorem 2

From Theorem 1 and Corollary 1, we can conclude immediately that, under $H_0$, as $n \to +\infty$,

$$\begin{pmatrix} \Pi_n(\rho_0,\gamma_0,\beta) \\ \Theta_n \end{pmatrix} \xrightarrow{\mathcal{D}} \mathcal{N}\left( \begin{pmatrix} 0 \\ -\frac{\eta(\rho_0,\gamma_0,\beta)}{2} \end{pmatrix}, \begin{pmatrix} \eta(\rho_0,\gamma_0,\beta) & \eta(\rho_0,\gamma_0,\beta) \\ \eta(\rho_0,\gamma_0,\beta) & \eta(\rho_0,\gamma_0,\beta) \end{pmatrix} \right).$$

Part [i] is a direct consequence of Theorem 1 and is briefly explained in the proof of Corollary 1. As explained there, the sequences of hypotheses are contiguous, and, by the joint normality above and Le Cam's third lemma (Proposition 4.2 in [32]), under $H_\beta^{(n)}$, as $n \to +\infty$,

$$\Pi_n(\rho_0,\gamma_0,\beta) \xrightarrow{\mathcal{D}} \mathcal{N}\big(\eta(\rho_0,\gamma_0,\beta), \eta(\rho_0,\gamma_0,\beta)\big).$$

We recall that, under $H_0$, as $n \to +\infty$,

$$\hat\vartheta_n(\rho_0,\gamma_0,\beta) \xrightarrow{P} \vartheta(\rho_0,\gamma_0,\beta),$$

where $\vartheta(\rho_0,\gamma_0,\beta) = \eta^{\frac{1}{2}}(\rho_0,\gamma_0,\beta)$. This convergence remains true under $H_\beta^{(n)}$ by contiguity. From Theorem 1, it can be seen that, as $n \to +\infty$, under $H_0$,

$$T_n(\rho_0,\gamma_0,\beta) \xrightarrow{\mathcal{D}} \mathcal{N}(0,1).$$

Thus, by Le Cam's third lemma, we can conclude that, under $H_\beta^{(n)}$, as $n \to +\infty$,

$$\frac{\Pi_n(\rho_0,\gamma_0,\beta)}{\hat\vartheta_n(\rho_0,\gamma_0,\beta)} \xrightarrow{\mathcal{D}} \mathcal{N}\big(\vartheta(\rho_0,\gamma_0,\beta), 1\big).$$

Indeed, for $n \ge 1$, we can write

$$\frac{\Pi_n(\rho_0,\gamma_0,\beta)}{\hat\vartheta_n(\rho_0,\gamma_0,\beta)} = \frac{\Pi_n(\rho_0,\gamma_0,\beta)}{\vartheta(\rho_0,\gamma_0,\beta)} \times \frac{\vartheta(\rho_0,\gamma_0,\beta)}{\hat\vartheta_n(\rho_0,\gamma_0,\beta)},$$

from which it results that, under $H_\beta^{(n)}$ and as $n \to +\infty$,

$$\frac{\Pi_n(\rho_0,\gamma_0,\beta)}{\vartheta(\rho_0,\gamma_0,\beta)} \xrightarrow{\mathcal{D}} \mathcal{N}\big(\vartheta(\rho_0,\gamma_0,\beta), 1\big),$$

since $\eta(\rho_0,\gamma_0,\beta)/\vartheta(\rho_0,\gamma_0,\beta) = \vartheta(\rho_0,\gamma_0,\beta)$.

For parts [ii] and [iii], to calculate the asymptotic power of our test, we calculate the asymptotic cumulative distribution of $\Pi_n(\rho_0,\gamma_0,\beta)/\hat\vartheta_n(\rho_0,\gamma_0,\beta)$ under $H_\beta^{(n)}$. We have

$$\lim_{n\to+\infty} P\left( \frac{\Pi_n(\rho_0,\gamma_0,\beta)}{\hat\vartheta_n(\rho_0,\gamma_0,\beta)} > z_\alpha \,\Big|\, H_\beta^{(n)} \right) = \lim_{n\to+\infty} P\left( \frac{\Pi_n(\rho_0,\gamma_0,\beta)}{\vartheta(\rho_0,\gamma_0,\beta)} > z_\alpha \,\Big|\, H_\beta^{(n)} \right) = 1 - \Phi\big(z_\alpha - \vartheta(\rho_0,\gamma_0,\beta)\big) = P_{k,\tau_k},$$

where $\Phi$ is the cumulative distribution function of the standard Gaussian law and $z_\alpha$ its $(1-\alpha)$-quantile.

By Section 4.4.3 of [33], the test based on $T_n(\rho_0,\gamma_0,\beta)$ is locally asymptotically optimal.

Appendix A.4. Proof of Proposition 1

Appendix A.4.1. Proof of [i]

From the Bahadur representation (9), as in [34,35], we consider the following sequences:

$$R_{n,j} = n^{-\frac{1}{2}} \sum_{t=1}^j \Upsilon(\rho_0,\mathbf{X}_{t-1})\, \aleph[\varepsilon_t(\rho_0,\gamma_0)], \quad j = 1,\ldots,n,$$

and

$$u^\top R_{n,j} = \frac{1}{\sqrt{n}} \sum_{t=1}^j y_t(u) \in \mathbb{R}, \quad \text{where } y_t(u) = u^\top \Upsilon(\rho_0,\mathbf{X}_{t-1})\, \aleph[\varepsilon_t(\rho_0,\gamma_0)], \; u \in \mathbb{R}^p.$$

It is easy to see that $u^\top R_{n,j}$ is centered for every $j = 1,\ldots,n$. Since $E\{\aleph[\varepsilon_t(\rho_0,\gamma_0)]\} = 0$ for any $t \in \mathbb{N}$, a simple calculation proves that $\{(u^\top R_{n,j}, \mathcal{F}_j),\ j = 1,\ldots,n\}$ is a martingale sequence.

We now check the first condition of Corollary 3.1 of [31]. Since $\varepsilon_t$ is independent of $\mathcal{F}_{t-1}$ for $t = 1,\ldots,n$, we can write

$$\sum_{t=1}^n E\Big( \big[n^{-\frac{1}{2}} y_t(u)\big]^2 \,\Big|\, \mathcal{F}_{t-1} \Big) = \sum_{t=1}^n n^{-1}\, E\Big( \big[u^\top \Upsilon(\rho_0,\mathbf{X}_{t-1})\, \aleph(\varepsilon_t(\rho_0,\gamma_0))\big]^2 \,\Big|\, \mathcal{F}_{t-1} \Big) = \sum_{j=1}^{k+1} \frac{n_j(n)}{n}\, \frac{1}{n_j(n)} \sum_{t=\tau_{j-1}}^{\tau_j - 1} \big[u^\top \Upsilon(\rho_0,\mathbf{X}_{t-1})\big]^2\, E\big[\aleph^2(\varepsilon_t(\rho_0,\gamma_0))\big].$$

By the assumptions (B2), (B3) and the ergodic theorem, for $j = 1,\ldots,k+1$, we can write

$$\frac{1}{n_j(n)} \sum_{t=\tau_{j-1}}^{\tau_j - 1} \big[u^\top \Upsilon(\rho_0,\mathbf{X}_{t-1})\big]^2\, E\big[\aleph^2(\varepsilon_t(\rho_0,\gamma_0))\big] \xrightarrow{a.s.} \int_{\mathbb{R}^d} \big[u^\top \Upsilon(\rho_0,x)\big]^2\, dF_j(x) \times \int_{\mathbb{R}} \aleph^2(x) f(x)\,dx < \infty.$$

Then,

$$\sum_{t=1}^n E\Big( \big[n^{-\frac{1}{2}} y_t(u)\big]^2 \,\Big|\, \mathcal{F}_{t-1} \Big) \xrightarrow[n\to+\infty]{a.s.} s = \int_{\mathbb{R}} \aleph^2(x) f(x)\,dx \times \sum_{j=1}^{k+1} \alpha_j \int_{\mathbb{R}^d} \big[u^\top \Upsilon(\rho_0,x)\big]^2\, dF_j(x).$$
Finally, we check the Lindeberg condition, that is, the second condition of Corollary 3.1 of [31]. For this purpose, we prove that, as $n \to +\infty$,

$$\sum_{t=1}^n E\Big( \big[n^{-\frac{1}{2}} y_t(u)\big]^2\, \mathbb{1}_{\{|n^{-1/2} y_t(u)| > \epsilon\}} \,\Big|\, \mathcal{F}_{t-1} \Big) \xrightarrow{a.s.} 0.$$

Let $\epsilon > 0$. By the Hölder and Markov inequalities, we can write

$$\sum_{t=1}^n E\Big( \big[n^{-\frac{1}{2}} y_t(u)\big]^2\, \mathbb{1}_{\{|n^{-1/2} y_t(u)| > \epsilon\}} \,\Big|\, \mathcal{F}_{t-1} \Big) \le \sum_{t=1}^n E^{\frac{2}{3}}\Big( \big|n^{-\frac{1}{2}} y_t(u)\big|^3 \,\Big|\, \mathcal{F}_{t-1} \Big)\, E^{\frac{1}{3}}\Big( \mathbb{1}_{\{|n^{-1/2} y_t(u)| > \epsilon\}} \,\Big|\, \mathcal{F}_{t-1} \Big) \le \frac{1}{\epsilon} \sum_{t=1}^n E\Big( \big|n^{-\frac{1}{2}} y_t(u)\big|^3 \,\Big|\, \mathcal{F}_{t-1} \Big) = \frac{1}{\epsilon\sqrt{n}} \sum_{j=1}^{k+1} \frac{n_j(n)}{n}\, \frac{1}{n_j(n)} \sum_{t=\tau_{j-1}}^{\tau_j - 1} \big|u^\top \Upsilon(\rho_0,\mathbf{X}_{t-1})\big|^3\, E\big[|\aleph(\varepsilon_t(\rho_0,\gamma_0))|^3\big].$$

By the piece-wise stationarity and the ergodic theorem, for $j = 1,\ldots,k+1$, we obtain, almost surely,

$$\lim_{n\to+\infty} \frac{1}{\sqrt{n}} \sum_{j=1}^{k+1} \frac{n_j(n)}{n}\, \frac{1}{n_j(n)} \sum_{t=\tau_{j-1}}^{\tau_j - 1} \big|u^\top \Upsilon(\rho_0,\mathbf{X}_{t-1})\big|^3\, E\big[|\aleph(\varepsilon_t(\rho_0,\gamma_0))|^3\big] = 0.$$

Then, using Corollary 3.1 of [31], we conclude that, under $H_0$,

$$u^\top \sqrt{n}(\rho_n - \rho_0) \xrightarrow{\mathcal{D}} \mathcal{N}(0,\, u^\top\Sigma u),$$

which, by the Cramér–Wold device, implies that, under $H_0$ and as $n \to +\infty$,

$$\sqrt{n}(\rho_n - \rho_0) \xrightarrow{\mathcal{D}} \mathcal{N}(0, \Sigma),$$

where $\Sigma$ is the covariance matrix defined as

$$\Sigma = \int_{\mathbb{R}} \aleph^2(x) f(x)\,dx\ \sum_{j=1}^{k+1} \alpha_j \int_{\mathbb{R}^d} \Upsilon(\rho_0,x)\,\Upsilon(\rho_0,x)^\top\, dF_j(x) \in \mathcal{M}_p(\mathbb{R}).$$

Appendix A.4.2. Proof of [ii]

We recall that, under $H_0$, as $n \to +\infty$,

$$\sqrt{n}(\rho_n - \rho_0) \xrightarrow{\mathcal{D}} \mathcal{N}(0, \Sigma), \qquad \Theta_n(\rho_0,\gamma_0,\beta) \xrightarrow{\mathcal{D}} \mathcal{N}\left(-\frac{\eta(\rho_0,\gamma_0,\beta)}{2},\ \eta(\rho_0,\gamma_0,\beta)\right),$$

where

$$\eta(\rho_0,\gamma_0,\beta) = \sum_{j=1}^{k+1} \alpha_j \sum_{1\le h\le m\le p} \beta_{j,h}\,\beta_{j,m}\, \eta_{j,2}^{(h,m)}(\rho_0,\gamma_0),$$

with

$$\eta_{j,2}^{(h,m)}(\rho_0,\gamma_0) = I(f) \int_{\mathbb{R}^d} \frac{1}{V^2(x)} \frac{\partial T}{\partial\gamma_{j,h}}(\rho_0,\gamma_0,x)\, \frac{\partial T}{\partial\gamma_{j,m}}(\rho_0,\gamma_0,x)\, dF_j(x).$$

We consider the sequence $Q_n = \sqrt{n}(\rho_n - \rho_0)$. Under $H_0$, as $n \to +\infty$,

$$\begin{pmatrix} Q_n \\ \Theta_n(\rho_0,\gamma_0,\beta) \end{pmatrix} \xrightarrow{\mathcal{D}} \mathcal{N}\left( \begin{pmatrix} 0 \\ -\frac{\eta(\rho_0,\gamma_0,\beta)}{2} \end{pmatrix}, \Xi \right),$$

where

$$\Xi = \lim_{n\to+\infty} \begin{pmatrix} \mathrm{Var}(Q_n) & \mathrm{Cov}(Q_n,\Theta_n) \\ \mathrm{Cov}(Q_n,\Theta_n)^\top & \mathrm{Var}(\Theta_n) \end{pmatrix},$$

and

$$\mathrm{Cov}\big(Q_n, \Theta_n(\rho_0,\gamma_0,\beta)\big) = \mathrm{Cov}\left(Q_n,\, \Pi_n(\rho_0,\gamma_0,\beta) - \frac{\eta(\rho_0,\gamma_0,\beta)}{2}\right) = \frac{1}{\sqrt{n}} \sum_{t=1}^n \left[ E\left( \frac{Q_n}{V(\mathbf{X}_{t-1})}\, \beta^\top N(\gamma_0,\mathbf{X}_{t-1})\, \phi_f[\varepsilon_t(\rho_0,\gamma_0)] \right) - E(Q_n)\, E\left( \frac{1}{V(\mathbf{X}_{t-1})}\, \beta^\top N(\gamma_0,\mathbf{X}_{t-1}) \right) E\big(\phi_f[\varepsilon_t(\rho_0,\gamma_0)]\big) \right].$$

Since $E\{\phi_f[\varepsilon_t(\rho_0,\gamma_0)]\} = 0$, $\lim_{n\to+\infty} E(Q_n) = 0$, $\varepsilon_t$ is independent of $\mathcal{F}_{t-1}$, $E\{\aleph[\varepsilon_t(\rho_0,\gamma_0)]\} = 0$, and using the stationarity and the ergodic theorem, we can easily see that

$$\mathrm{Cov}\big(Q_n, \Theta_n(\rho_0,\gamma_0,\beta)\big) \xrightarrow[n\to+\infty]{} C,$$

where

$$C = \int_{\mathbb{R}} \aleph(x)\,\phi_f(x)\,f(x)\,dx\ \sum_{j=1}^{k+1} \alpha_j \sum_{h=1}^p \beta_{j,h} \int_{\mathbb{R}^d} \frac{\Upsilon(\rho_0,x)}{V(x)}\, \frac{\partial T}{\partial\gamma_{j,h}}(\rho_0,\gamma_0,x)\, dF_j(x).$$

Then, under $H_0$, we have

$$\begin{pmatrix} Q_n \\ \Theta_n(\rho_0,\gamma_0,\beta) \end{pmatrix} \xrightarrow{\mathcal{D}} \mathcal{N}\left( \begin{pmatrix} 0 \\ -\frac{\eta(\rho_0,\gamma_0,\beta)}{2} \end{pmatrix}, \begin{pmatrix} \Sigma & C \\ C^\top & \eta(\rho_0,\gamma_0,\beta) \end{pmatrix} \right).$$

From this result and Le Cam's third lemma, under $H_\beta^{(n)}$, as $n$ tends to $+\infty$, we have

$$\sqrt{n}(\rho_n - \rho_0) = Q_n \xrightarrow{\mathcal{D}} \mathcal{N}(C, \Sigma).$$

Appendix A.5. Proof of Proposition 2

Appendix A.5.1. Proof of [i]

We prove the convergence of the central sequence (4) to its estimated version, in order to verify that the test remains optimal when the parameter is replaced by its estimator. For any $\rho\in\mathbb{R}^p$ and $\gamma,\beta\in\mathbb{R}^{p(k+1)}$, we define
$$\Theta_n(\rho,\gamma,\beta)=\sum_{t=1}^{n}\left\{\log f\!\left[\varepsilon_t\!\left(\rho,\gamma+\frac{\beta}{\sqrt{n}}\right)\right]-\log f[\varepsilon_t(\rho,\gamma)]\right\}+o_P(1).$$
Then the log-likelihood ratio of $H_0$ against $H_\beta^{(n)}$ is $\Theta_n(\rho_0,\gamma_0,\beta)$. For $\tilde{\rho}_n$ lying between $\rho_n$ and $\rho_0$, we write a second-order Taylor expansion of $\Pi_n(\rho_0,\gamma_0,\beta)$ around $\rho_n$ and obtain
$$\Pi_n(\rho_0,\gamma_0,\beta)=\Pi_n(\rho_n,\gamma_0,\beta)+(\rho_0-\rho_n)^\top\nabla_\rho\Pi_n(\rho_n,\gamma_0,\beta)+\frac{1}{2}(\rho_0-\rho_n)^\top\nabla^2_\rho\Pi_n(\tilde{\rho}_n,\gamma_0,\beta)(\rho_0-\rho_n).$$
We wish to prove that, under $H_0$, as $n\to+\infty$,
$$(\rho_0-\rho_n)^\top\nabla_\rho\Pi_n(\rho_n,\gamma_0,\beta)=o_P(1),$$
$$\frac{1}{2}(\rho_0-\rho_n)^\top\nabla^2_\rho\Pi_n(\tilde{\rho}_n,\gamma_0,\beta)(\rho_0-\rho_n)=o_P(1).$$
In order to simplify the notation, let $T_{\gamma_{m,h}}=\partial T/\partial\gamma_{m,h}$. We have
$$\begin{aligned}\frac{1}{\sqrt{n}}\nabla^2_\rho\Pi_n(\tilde{\rho}_n,\gamma_0,\beta)&=\frac{1}{\sqrt{n}}\sum_{t=1}^{n}\frac{1}{V(X_{t-1})}\sum_{m=1}^{k+1}\sum_{h=1}^{p}\beta_{m,h}\,\nabla^2_\rho T_{\gamma_{m,h}}(\tilde{\rho}_n,\gamma_0,X_{t-1})\,\phi_f[\varepsilon_t(\tilde{\rho}_n,\gamma_0)]\\&\quad-\frac{2}{\sqrt{n}}\sum_{t=1}^{n}\frac{1}{V^2(X_{t-1})}\sum_{m=1}^{k+1}\sum_{h=1}^{p}\beta_{m,h}\,\nabla_\rho T_{\gamma_{m,h}}(\tilde{\rho}_n,\gamma_0,X_{t-1})\,\nabla_\rho T(\tilde{\rho}_n,X_{t-1})^\top\,\phi_f'[\varepsilon_t(\tilde{\rho}_n,\gamma_0)]\\&\quad-\frac{1}{\sqrt{n}}\sum_{t=1}^{n}\frac{1}{V^2(X_{t-1})}\sum_{m=1}^{k+1}\sum_{h=1}^{p}\beta_{m,h}\,T_{\gamma_{m,h}}(\tilde{\rho}_n,\gamma_0,X_{t-1})\,\nabla^2_\rho T(\tilde{\rho}_n,X_{t-1})\,\phi_f'[\varepsilon_t(\tilde{\rho}_n,\gamma_0)]\\&\quad+\frac{1}{\sqrt{n}}\sum_{t=1}^{n}\frac{1}{V^3(X_{t-1})}\sum_{m=1}^{k+1}\sum_{h=1}^{p}\beta_{m,h}\,T_{\gamma_{m,h}}(\tilde{\rho}_n,\gamma_0,X_{t-1})\,\nabla_\rho T(\tilde{\rho}_n,X_{t-1})\,\nabla_\rho T(\tilde{\rho}_n,X_{t-1})^\top\,\phi_f''[\varepsilon_t(\tilde{\rho}_n,\gamma_0)]\\&=\Delta_{1,n}(\tilde{\rho}_n,\gamma_0,\beta)+\Delta_{2,n}(\tilde{\rho}_n,\gamma_0,\beta)+\Delta_{3,n}(\tilde{\rho}_n,\gamma_0,\beta)+\Delta_{4,n}(\tilde{\rho}_n,\gamma_0,\beta).\end{aligned}$$
By repeated Taylor expansions, assumptions $(B_4)$ and (A9), the ergodic theorem and a direct calculation, we prove that, for $i=1,\ldots,4$, $|||\Delta_{i,n}(\tilde{\rho}_n,\gamma_0,\beta)|||_p$ is bounded. Thus, as $n\to+\infty$, $|||\nabla^2_\rho\Pi_n(\tilde{\rho}_n,\gamma_0,\beta)|||_p/\sqrt{n}$ tends in probability to a finite positive real number, denoted by $b$.
Recall from (A11) that, as $n\to+\infty$, we have
$$\left|\frac{1}{2}(\rho_0-\rho_n)^\top\nabla^2_\rho\Pi_n(\tilde{\rho}_n,\gamma_0,\beta)(\rho_0-\rho_n)\right|\leq\frac{1}{2}\left\|\sqrt{n}\,(\rho_0-\rho_n)\right\|_p\,\Big|\Big|\Big|\frac{1}{\sqrt{n}}\nabla^2_\rho\Pi_n(\tilde{\rho}_n,\gamma_0,\beta)\Big|\Big|\Big|_p\,\left\|\rho_0-\rho_n\right\|_p\approx\frac{1}{2}\left\|\sqrt{n}\,(\rho_0-\rho_n)\right\|_p\left\|\rho_0-\rho_n\right\|_p\times b.$$
Since, under $H_0$ and as $n\to+\infty$, we have
$$\sqrt{n}\,(\rho_0-\rho_n)\xrightarrow{\mathcal{D}}\mathcal{N}(0,\Sigma),$$
it follows that
$$\frac{1}{2}(\rho_0-\rho_n)^\top\nabla^2_\rho\Pi_n(\tilde{\rho}_n,\gamma_0,\beta)(\rho_0-\rho_n)=o_P(1).$$
From (A12), we can write
$$\Pi_n(\rho_0,\gamma_0,\beta)=\Pi_n(\rho_n,\gamma_0,\beta)+(\rho_0-\rho_n)^\top\nabla_\rho\Pi_n(\rho_n,\gamma_0,\beta)+o_P(1).$$
Now, we prove that
$$(\rho_0-\rho_n)^\top\nabla_\rho\Pi_n(\rho_n,\gamma_0,\beta)=o_P(1).$$
Adding and subtracting appropriate terms, as $n\to+\infty$, we can write
$$\begin{aligned}\Pi_n(\rho_0,\gamma_0,\beta)&=\Pi_n(\rho_n,\gamma_0,\beta)+(\rho_0-\rho_n+\rho_{s(n)}-\rho_{s(n)})^\top\nabla_\rho\Pi_n(\rho_n,\gamma_0,\beta)+o_P(1)\\&=\Pi_n(\rho_n,\gamma_0,\beta)+(\rho_0-\rho_{s(n)})^\top\nabla_\rho\Pi_n(\rho_n,\gamma_0,\beta)+(\rho_{s(n)}-\rho_n)^\top\nabla_\rho\Pi_n(\rho_n,\gamma_0,\beta)+o_P(1),\end{aligned}$$
where $\{s(n)\}_{n\geq1}$ stands for a sequence of positive integers such that $n/s(n)\to0$ as $n\to+\infty$ (for instance, $s(n)=n^2$), and $\rho_{s(n)}$ denotes the estimator computed from $s(n)$ observations.
Observing that, as $n\to+\infty$,
$$\sqrt{n}\,(\rho_0-\rho_{s(n)})=\sqrt{s(n)}\,(\rho_0-\rho_{s(n)})\times\sqrt{\frac{n}{s(n)}}=o_P(1),$$
it is easy to see that
$$(\rho_0-\rho_{s(n)})^\top\nabla_\rho\Pi_n(\rho_n,\gamma_0,\beta)=\sqrt{n}\,(\rho_0-\rho_{s(n)})^\top\,\frac{1}{\sqrt{n}}\nabla_\rho\Pi_n(\rho_n,\gamma_0,\beta).$$
Then, it suffices to show that $\nabla_\rho\Pi_n(\rho_n,\gamma_0,\beta)/\sqrt{n}$ is bounded in probability. For this, we can write the following decomposition:
$$\begin{aligned}\frac{1}{\sqrt{n}}\nabla_\rho\Pi_n(\rho_n,\gamma_0,\beta)&=\frac{1}{\sqrt{n}}\sum_{t=1}^{n}\frac{1}{V(X_{t-1})}\sum_{m=1}^{k+1}\sum_{h=1}^{p}\beta_{m,h}\,\nabla_\rho T_{\gamma_{m,h}}(\rho_n,\gamma_0,X_{t-1})\,\phi_f[\varepsilon_t(\rho_n,\gamma_0)]\\&\quad-\frac{1}{\sqrt{n}}\sum_{t=1}^{n}\frac{1}{V^2(X_{t-1})}\sum_{m=1}^{k+1}\sum_{h=1}^{p}\beta_{m,h}\,T_{\gamma_{m,h}}(\rho_n,\gamma_0,X_{t-1})\,\nabla_\rho T(\rho_n,\gamma_0,X_{t-1})\,\phi_f'[\varepsilon_t(\rho_n,\gamma_0)]\\&=\nabla_{1,n}(\rho_n,\gamma_0,\beta)+\nabla_{2,n}(\rho_n,\gamma_0,\beta).\end{aligned}$$
Using assumptions $(B_4)$ and (A5), the ergodic theorem, and the independence of $\varepsilon_t$ from $\mathcal{F}_{t-1}$, the study of the asymptotic behavior of $\nabla_{1,n}(\rho_n,\gamma_0,\beta)$ and $\nabla_{2,n}(\rho_n,\gamma_0,\beta)$ shows that, as $n\to+\infty$,
$$\nabla_{1,n}(\rho_n,\gamma_0,\beta)\xrightarrow{P}0\in\mathbb{R}^p,$$
while $\|\nabla_{2,n}(\rho_n,\gamma_0,\beta)\|_p$ tends in probability to a finite positive real number. Consequently,
$$\Big\|\frac{1}{\sqrt{n}}\nabla_\rho\Pi_n(\rho_n,\gamma_0,\beta)\Big\|_p\leq\|\nabla_{1,n}(\rho_n,\gamma_0,\beta)\|_p+\|\nabla_{2,n}(\rho_n,\gamma_0,\beta)\|_p<\infty.$$
It follows that
$$(\rho_0-\rho_{s(n)})^\top\nabla_\rho\Pi_n(\rho_n,\gamma_0,\beta)=o_P(1).$$
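As an aside, the role of the faster sample size $s(n)$ can be seen numerically: if $\rho_{s(n)}$ is a $\sqrt{s(n)}$-consistent estimator computed from $s(n)$ observations with $n/s(n)\to0$, then $\sqrt{n}\,(\rho_0-\rho_{s(n)})$ collapses to 0. The toy sketch below illustrates this in the AR(1) special case with $s(n)=n^2$; the model and the choice $s(n)=n^2$ are illustrative assumptions of ours.

```python
import numpy as np

rng = np.random.default_rng(1)

def ar1_ls(m, rho0=0.5):
    """Least-squares AR(1) estimate from m simulated observations."""
    eps = rng.standard_normal(m)
    x = np.empty(m)
    x[0] = eps[0]
    for t in range(1, m):
        x[t] = rho0 * x[t - 1] + eps[t]
    return (x[1:] @ x[:-1]) / (x[:-1] @ x[:-1])

for n in (50, 100, 200):
    s = n * n                                  # s(n) = n^2, so n/s(n) -> 0
    dev = np.sqrt(n) * abs(ar1_ls(s) - 0.5)    # sqrt(n) |rho_{s(n)} - rho_0|
    print(n, round(dev, 4))                    # shrinks roughly like 1/sqrt(n)
```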
Hence,
$$\Pi_n(\rho_0,\gamma_0,\beta)=\Pi_n(\rho_n,\gamma_0,\beta)+(\rho_{s(n)}-\rho_n)^\top\nabla_\rho\Pi_n(\rho_n,\gamma_0,\beta)+o_P(1).$$
To handle the above Equation (A14), we need the following lemma.
Lemma A1.
Assume that $(B_2)$ holds. Let $\{s(n)\}_{n\geq1}$ be a sequence of positive integers such that $n/s(n)$ tends to 0 as $n\to+\infty$. For $\gamma_0,\beta\in\mathbb{R}^{p(k+1)}$, $\rho_{s(n)}$ asymptotically lies in the tangent space $\mathcal{T}_n$ to the curve of $\Pi_n(\rho,\gamma,\beta)$ at $\rho_n$, defined as follows:
$$\mathcal{T}_n=\Big\{z\in\mathbb{R}^p:\ \Pi_n(z,\gamma_0,\beta)=\Pi_n(\rho_n,\gamma_0,\beta)+(z-\rho_n)^\top\nabla_\rho\Pi_n(\rho_n,\gamma_0,\beta)\Big\}.$$
Proof. 
Writing a second-order Taylor expansion of $\Pi_n(\rho_{s(n)},\gamma_0,\beta)$ in a neighborhood of $\rho_n$, for some $\tilde{\rho}_{s(n)}$ lying between $\rho_{s(n)}$ and $\rho_n$, we obtain
$$\Pi_n(\rho_{s(n)},\gamma_0,\beta)=\Pi_n(\rho_n,\gamma_0,\beta)+(\rho_{s(n)}-\rho_n)^\top\nabla_\rho\Pi_n(\rho_n,\gamma_0,\beta)+\frac{1}{2}(\rho_{s(n)}-\rho_n)^\top\nabla^2_\rho\Pi_n(\tilde{\rho}_{s(n)},\gamma_0,\beta)(\rho_{s(n)}-\rho_n).$$
To prove that, as $n\to+\infty$, $\rho_{s(n)}$ belongs to $\mathcal{T}_n$, it suffices to show that $(\rho_{s(n)}-\rho_n)^\top\nabla^2_\rho\Pi_n(\tilde{\rho}_{s(n)},\gamma_0,\beta)(\rho_{s(n)}-\rho_n)=o_P(1)$.
To find the asymptotic distribution of $\sqrt{n}\,(\rho_{s(n)}-\rho_n)$, we add and subtract appropriate terms and obtain
$$\sqrt{n}\,(\rho_{s(n)}-\rho_n)=\sqrt{s(n)}\,(\rho_{s(n)}-\rho_0)\sqrt{\frac{n}{s(n)}}+\sqrt{n}\,(\rho_0-\rho_n)=o_P(1)+\sqrt{n}\,(\rho_0-\rho_n).$$
Then, asymptotically, $\sqrt{n}\,(\rho_{s(n)}-\rho_n)$ has the same distribution as $\sqrt{n}\,(\rho_0-\rho_n)$; that is, it converges in distribution to a normal law. Now, to prove that $(\rho_{s(n)}-\rho_n)^\top\nabla^2_\rho\Pi_n(\tilde{\rho}_{s(n)},\gamma_0,\beta)(\rho_{s(n)}-\rho_n)=o_P(1)$, it suffices to show that $\nabla^2_\rho\Pi_n(\tilde{\rho}_{s(n)},\gamma_0,\beta)/\sqrt{n}$ is bounded in probability.
Recall that
$$\frac{1}{\sqrt{n}}\nabla^2_\rho\Pi_n(\tilde{\rho}_{s(n)},\gamma_0,\beta)=\Delta_{1,n}(\tilde{\rho}_{s(n)},\gamma_0,\beta)+\Delta_{2,n}(\tilde{\rho}_{s(n)},\gamma_0,\beta)+\Delta_{3,n}(\tilde{\rho}_{s(n)},\gamma_0,\beta)+\Delta_{4,n}(\tilde{\rho}_{s(n)},\gamma_0,\beta),$$
where $\{s(n)\}_{n\geq1}$ stands for a sequence of positive integers such that $n/s(n)\to0$ as $n\to+\infty$, $\rho_n$ is given by $(B_2)$, and $\tilde{\rho}_{s(n)}$ lies between $\rho_{s(n)}$ and $\rho_0$.
We have, as $n\to+\infty$, $\|\tilde{\rho}_{s(n)}-\rho_0\|\leq\|\rho_{s(n)}-\rho_0\|\xrightarrow{a.s.}0$; hence,
$$\tilde{\rho}_{s(n)}-\rho_0=o_P(1).$$
For some $\tilde{\rho}_n$ lying between $\rho_n$ and $\rho_0$, we proved previously that $|||\nabla^2_\rho\Pi_n(\tilde{\rho}_n,\gamma_0,\beta)|||_p/\sqrt{n}$ converges in probability, as $n\to+\infty$, to a finite positive number. Following the same lines, we can prove that $|||\nabla^2_\rho\Pi_n(\tilde{\rho}_{s(n)},\gamma_0,\beta)|||_p/\sqrt{n}$ converges in probability to a finite positive number, where $\tilde{\rho}_{s(n)}$ lies between $\rho_{s(n)}$ and $\rho_0$. Consequently,
$$(\rho_{s(n)}-\rho_n)^\top\nabla^2_\rho\Pi_n(\tilde{\rho}_{s(n)},\gamma_0,\beta)(\rho_{s(n)}-\rho_n)=o_P(1).$$
It results from Lemma A1 that, as $n\to+\infty$, $\rho_{s(n)}$ belongs to the tangent space $\mathcal{T}_n$. Thus, replacing $z$ by $\rho_{s(n)}$, we obtain
$$\Pi_n(\rho_{s(n)},\gamma_0,\beta)=\Pi_n(\rho_n,\gamma_0,\beta)+(\rho_{s(n)}-\rho_n)^\top\nabla_\rho\Pi_n(\rho_n,\gamma_0,\beta)+o_P(1).$$
Finally, recalling (A14), we obtain
$$\Pi_n(\rho_0,\gamma_0,\beta)=\Pi_n(\rho_{s(n)},\gamma_0,\beta)+o_P(1).$$

Appendix A.5.2. Proof of [ii]

To prove (12), it suffices to show that, as $n\to+\infty$, $\hat{\eta}_n(\hat{\rho}_n,\gamma_0,\beta)\xrightarrow{P}\eta(\rho_0,\gamma_0,\beta)$. For any $\beta\in\mathbb{R}^{p(k+1)}$, we have
$$\hat{\eta}_n(\hat{\rho}_n,\gamma_0,\beta)-\eta(\rho_0,\gamma_0,\beta)=\sum_{j=1}^{k+1}\sum_{1\leq h\leq m\leq p}\beta_{j,h}\,\beta_{j,m}\left[\hat{\alpha}_j\,\hat{\eta}_{j,2}^{(h,m)}(\hat{\rho}_n,\gamma_0)-\alpha_j\,\eta_{j,2}^{(h,m)}(\rho_0,\gamma_0)\right].$$
We add and subtract appropriate terms and obtain
$$\begin{aligned}\hat{\eta}_n(\hat{\rho}_n,\gamma_0,\beta)-\eta(\rho_0,\gamma_0,\beta)&=\sum_{j=1}^{k+1}\sum_{1\leq h\leq m\leq p}\beta_{j,h}\beta_{j,m}(\hat{\alpha}_j-\alpha_j)\left[\hat{\eta}_{j,2}^{(h,m)}(\hat{\rho}_n,\gamma_0)-\eta_{j,2}^{(h,m)}(\rho_0,\gamma_0)+\eta_{j,2}^{(h,m)}(\rho_0,\gamma_0)\right]\\&\quad+\sum_{j=1}^{k+1}\sum_{1\leq h\leq m\leq p}\beta_{j,h}\beta_{j,m}\,\alpha_j\left[\hat{\eta}_{j,2}^{(h,m)}(\hat{\rho}_n,\gamma_0)-\eta_{j,2}^{(h,m)}(\rho_0,\gamma_0)\right]\\&=\sum_{j=1}^{k+1}\sum_{1\leq h\leq m\leq p}\beta_{j,h}\beta_{j,m}\Big[(\hat{\alpha}_j-\alpha_j)\big[\hat{\eta}_{j,2}^{(h,m)}(\hat{\rho}_n,\gamma_0)-\eta_{j,2}^{(h,m)}(\rho_0,\gamma_0)\big]+(\hat{\alpha}_j-\alpha_j)\,\eta_{j,2}^{(h,m)}(\rho_0,\gamma_0)\Big]\\&\quad+\sum_{j=1}^{k+1}\sum_{1\leq h\leq m\leq p}\beta_{j,h}\beta_{j,m}\,\alpha_j\left[\hat{\eta}_{j,2}^{(h,m)}(\hat{\rho}_n,\gamma_0)-\eta_{j,2}^{(h,m)}(\rho_0,\gamma_0)\right],\end{aligned}$$
where, for $h,m=1,\ldots,p$,
$$\hat{\eta}_{j,2}^{(h,m)}(\rho_0,\gamma_0)=\frac{I(f)}{n_j(n)}\sum_{t=\tau_{j-1}}^{\tau_j}\frac{1}{V^2(X_{t-1})}\,T_{\gamma_{j,h}}(\rho_0,\gamma_0,X_{t-1})\,T_{\gamma_{j,m}}(\rho_0,\gamma_0,X_{t-1}).$$
For all $j=1,\ldots,k+1$, we have
$$\hat{\alpha}_j-\alpha_j\xrightarrow[n\to+\infty]{}0.$$
For $j=1,\ldots,k+1$ and $h=1,\ldots,p$, using assumption (A9) and the fact that the functions $1/V$ and $T_{\gamma_{j,h}}$ are bounded, it follows from Lebesgue's convergence theorem that each term on the right-hand side of (A15) tends to 0.

Appendix A.6. Proof of Theorem 3

Appendix A.6.1. Proof of [i]

The statistic that we study is
$$T_n(\rho_{s(n)},\gamma_0,\beta)=\frac{\Pi_n(\rho_{s(n)},\gamma_0,\beta)}{\hat{\vartheta}_n(\rho_{s(n)},\gamma_0,\beta)}.$$
We proved in Proposition 2 that
$$\Pi_n(\rho_{s(n)},\gamma_0,\beta)=\Pi_n(\rho_0,\gamma_0,\beta)+o_P(1).$$
We also proved that, as $n\to+\infty$,
$$\hat{\vartheta}_n(\rho_{s(n)},\gamma_0,\beta)\xrightarrow{P}\vartheta(\rho_0,\gamma_0,\beta).$$
Then, under $H_0$ and as $n\to+\infty$, we have
$$\frac{\hat{\vartheta}_n(\rho_{s(n)},\gamma_0,\beta)}{\vartheta(\rho_0,\gamma_0,\beta)}\xrightarrow{P}1,$$
and
$$T_n(\rho_{s(n)},\gamma_0,\beta)=\frac{\Pi_n(\rho_0,\gamma_0,\beta)+o_P(1)}{\hat{\vartheta}_n(\rho_{s(n)},\gamma_0,\beta)}=\frac{\Pi_n(\rho_0,\gamma_0,\beta)}{\vartheta(\rho_0,\gamma_0,\beta)}\times\frac{\vartheta(\rho_0,\gamma_0,\beta)}{\hat{\vartheta}_n(\rho_{s(n)},\gamma_0,\beta)}+\frac{1}{\hat{\vartheta}_n(\rho_{s(n)},\gamma_0,\beta)}\times o_P(1)=T_n(\rho_0,\gamma_0,\beta)+o_P(1).$$

Appendix A.6.2. Proof of [ii] and [iii]

The convergence of $\hat{\vartheta}_n(\rho_{s(n)},\gamma_0,\beta)$ to $\vartheta(\rho_0,\gamma_0,\beta)$ remains true under the local alternatives $H_\beta^{(n)}$ by contiguity. Then, under $H_\beta^{(n)}$ and as $n\to+\infty$, using Le Cam's third lemma, we have
$$T_n(\rho_{s(n)},\gamma_0,\beta)=\frac{\Pi_n(\rho_{s(n)},\gamma_0,\beta)}{\hat{\vartheta}_n(\rho_{s(n)},\gamma_0,\beta)}\xrightarrow{\mathcal{D}}\mathcal{N}(\vartheta(\rho_0,\gamma_0,\beta),1),$$
which shows that the local power of the test remains the same.
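If, as the standardization suggests, the statistic is asymptotically $\mathcal{N}(0,1)$ under $H_0$, then under the $\mathcal{N}(\vartheta,1)$ local limit above, the one-sided level-$\alpha$ test rejecting when the statistic exceeds $z_{1-\alpha}$ has asymptotic local power $1-\Phi(z_{1-\alpha}-\vartheta)$. The short sketch below evaluates this curve on an illustrative grid of $\vartheta$ values; the grid and the level are our choices, not from the paper.

```python
import numpy as np
from scipy.stats import norm

alpha = 0.05
z = norm.ppf(1 - alpha)                  # one-sided critical value z_{1-alpha}
vartheta = np.linspace(0.0, 4.0, 9)      # illustrative grid of local shifts
power = 1 - norm.cdf(z - vartheta)       # asymptotic local power 1 - Φ(z - ϑ)
for v, p in zip(vartheta, power):
    print(f"vartheta = {v:.1f}  ->  power ≈ {p:.3f}")
```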

Appendix A.7. Proof of Proposition 3

The proof relies on a number of lemmas that we state and prove.
Lemma A2.
Assume that (A1)–(A10), $(B_1)$–$(B_5)$ and $(B'_1)$–$(B'_3)$ hold. Then, for any sequence $s(n)$ of positive integers satisfying $n/s(n)\to0$ as $n\to+\infty$, for any sequence $\{\hat{\gamma}_{0,n}\}_{n\geq1}$ of consistent and asymptotically normal estimators of $\gamma_0$, and for any $\beta\in\mathbb{R}^{p(k+1)}$, under $H_0$ and as $n\to+\infty$,
$$\Pi_n(\rho_0,\gamma_0,\beta)=\Pi_n(\rho_0,\hat{\gamma}_{0,s(n)},\beta)+o_P(1).$$
Proof. 
For any $\rho\in\mathbb{R}^p$ and $(\gamma,\beta)\in\mathbb{R}^{p(k+1)}\times\mathbb{R}^{p(k+1)}$, with $\hat{\gamma}_{0,n}$ the maximum likelihood estimator of $\gamma_0$, we write a second-order Taylor expansion of $\Pi_n(\rho_0,\gamma_0,\beta)$ in a neighborhood of $\hat{\gamma}_{0,n}$ and obtain, for some $\tilde{\tilde{\gamma}}_{0,n}$ lying between $\gamma_0$ and $\hat{\gamma}_{0,n}$,
$$\Pi_n(\rho_0,\gamma_0,\beta)=\Pi_n(\rho_0,\hat{\gamma}_{0,n},\beta)-(\hat{\gamma}_{0,n}-\gamma_0)^\top\nabla_\gamma\Pi_n(\rho_0,\hat{\gamma}_{0,n},\beta)+\frac{1}{2}(\hat{\gamma}_{0,n}-\gamma_0)^\top\nabla^2_\gamma\Pi_n(\rho_0,\tilde{\tilde{\gamma}}_{0,n},\beta)(\hat{\gamma}_{0,n}-\gamma_0),$$
where
$$\nabla^2_\gamma\Pi_n(\rho_0,\tilde{\tilde{\gamma}}_{0,n},\beta)=\begin{pmatrix}\partial^2_{\gamma_1}\Pi_n(\rho_0,\tilde{\tilde{\gamma}}_{0,n},\beta)&\cdots&\partial^2_{\gamma_1\gamma_p}\Pi_n(\rho_0,\tilde{\tilde{\gamma}}_{0,n},\beta)\\\vdots&\ddots&\vdots\\\partial^2_{\gamma_p\gamma_1}\Pi_n(\rho_0,\tilde{\tilde{\gamma}}_{0,n},\beta)&\cdots&\partial^2_{\gamma_p}\Pi_n(\rho_0,\tilde{\tilde{\gamma}}_{0,n},\beta)\end{pmatrix}\in\mathcal{M}_{p(k+1)}(\mathbb{R}).$$
Our aim is to prove that, under $H_0$, as $n\to+\infty$,
$$(\hat{\gamma}_{0,n}-\gamma_0)^\top\nabla_\gamma\Pi_n(\rho_0,\hat{\gamma}_{0,n},\beta)=o_P(1),$$
$$(\hat{\gamma}_{0,n}-\gamma_0)^\top\nabla^2_\gamma\Pi_n(\rho_0,\tilde{\tilde{\gamma}}_{0,n},\beta)(\hat{\gamma}_{0,n}-\gamma_0)=o_P(1).$$
Starting with (A17), multiplying and dividing by $\sqrt{n}$, we observe that
$$\left|(\hat{\gamma}_{0,n}-\gamma_0)^\top\nabla^2_\gamma\Pi_n(\rho_0,\tilde{\tilde{\gamma}}_{0,n},\beta)(\hat{\gamma}_{0,n}-\gamma_0)\right|\leq\left\|\sqrt{n}\,(\hat{\gamma}_{0,n}-\gamma_0)\right\|_{p(k+1)}\times\Big|\Big|\Big|\frac{1}{\sqrt{n}}\nabla^2_\gamma\Pi_n(\rho_0,\tilde{\tilde{\gamma}}_{0,n},\beta)\Big|\Big|\Big|_{p(k+1)}\times\left\|\hat{\gamma}_{0,n}-\gamma_0\right\|_{p(k+1)}.$$
For $\hat{\gamma}_{0,n}$, an estimator of $\gamma_0$, following the same techniques as in Proposition 1, under $H_0$ and as $n\to+\infty$, $\sqrt{n}\,(\hat{\gamma}_{0,n}-\gamma_0)$ converges in distribution to a normal law and $\hat{\gamma}_{0,n}-\gamma_0$ tends to 0 in probability.
Then, to prove (A17), it suffices to show that $|||\nabla^2_\gamma\Pi_n(\rho_0,\tilde{\tilde{\gamma}}_{0,n},\beta)|||_{p(k+1)}/\sqrt{n}$ tends in probability, as $n\to+\infty$, to some positive random variable. Recalling (4), we have
$$\Pi_n(\rho,\gamma,\beta)=\frac{1}{\sqrt{n}}\sum_{t=1}^{n}\frac{1}{V(X_{t-1})}\sum_{m=1}^{k+1}\sum_{h=1}^{p}\beta_{m,h}\,T_{\gamma_{m,h}}(\rho,\gamma,X_{t-1})\,\phi_f[\varepsilon_t(\rho,\gamma)].$$
Then,
$$\nabla_\gamma T_{\gamma_{m,h}}(\rho_0,\gamma,X_{t-1})=\Big(\omega_1(t)\,\nabla_{\gamma_1}T_{\gamma_{m,h}}(\rho_0,\gamma,X_{t-1})^\top,\ldots,\omega_{k+1}(t)\,\nabla_{\gamma_{k+1}}T_{\gamma_{m,h}}(\rho_0,\gamma,X_{t-1})^\top\Big)^\top\in\mathbb{R}^{p(k+1)},$$
and
$$\nabla^2_\gamma T_{\gamma_{m,h}}(\rho_0,\gamma,X_{t-1})=\begin{pmatrix}\omega_1^2(t)\,H_{\gamma_1}T_{\gamma_{m,h}}&0&\cdots&0\\0&\omega_2^2(t)\,H_{\gamma_2}T_{\gamma_{m,h}}&\cdots&0\\\vdots&\vdots&\ddots&\vdots\\0&0&\cdots&\omega_{k+1}^2(t)\,H_{\gamma_{k+1}}T_{\gamma_{m,h}}\end{pmatrix}\in\mathcal{M}_{p(k+1)}(\mathbb{R}),$$
where, for any $i=1,\ldots,k+1$, $H_{\gamma_i}T_{\gamma_{m,h}}(\rho_0,\gamma,X_{t-1})$ is the Hessian matrix of $T_{\gamma_{m,h}}$ with respect to $\gamma_i$ (the argument $(\rho_0,\gamma,X_{t-1})$ of each diagonal block is omitted in the display).
Recall that we wish to bound $|||\nabla^2_\gamma\Pi_n(\rho_0,\tilde{\tilde{\gamma}}_{0,n},\beta)|||_{p(k+1)}/\sqrt{n}$. For any $\beta\in\mathbb{R}^{p(k+1)}$, we have
$$\begin{aligned}\frac{1}{\sqrt{n}}\nabla^2_\gamma\Pi_n(\rho_0,\tilde{\tilde{\gamma}}_{0,n},\beta)&=\frac{1}{\sqrt{n}}\sum_{t=1}^{n}\frac{1}{V(X_{t-1})}\sum_{m=1}^{k+1}\sum_{h=1}^{p}\beta_{m,h}\,\nabla^2_\gamma T_{\gamma_{m,h}}(\rho_0,\tilde{\tilde{\gamma}}_{0,n},X_{t-1})\,\phi_f[\varepsilon_t(\rho_0,\tilde{\tilde{\gamma}}_{0,n})]\\&\quad-\frac{2}{\sqrt{n}}\sum_{t=1}^{n}\frac{1}{V^2(X_{t-1})}\sum_{m=1}^{k+1}\sum_{h=1}^{p}\beta_{m,h}\,\big(\nabla_\gamma T_{\gamma_{m,h}}(\rho_0,\tilde{\tilde{\gamma}}_{0,n},X_{t-1})\big)\big(\nabla_\gamma T(\rho_0,\tilde{\tilde{\gamma}}_{0,n},X_{t-1})\big)^\top\,\phi_f'[\varepsilon_t(\rho_0,\tilde{\tilde{\gamma}}_{0,n})]\\&\quad-\frac{1}{\sqrt{n}}\sum_{t=1}^{n}\frac{1}{V^2(X_{t-1})}\sum_{m=1}^{k+1}\sum_{h=1}^{p}\beta_{m,h}\,T_{\gamma_{m,h}}(\rho_0,\tilde{\tilde{\gamma}}_{0,n},X_{t-1})\,\nabla^2_\gamma T(\rho_0,\tilde{\tilde{\gamma}}_{0,n},X_{t-1})\,\phi_f'[\varepsilon_t(\rho_0,\tilde{\tilde{\gamma}}_{0,n})]\\&\quad+\frac{1}{\sqrt{n}}\sum_{t=1}^{n}\frac{1}{V^3(X_{t-1})}\sum_{m=1}^{k+1}\sum_{h=1}^{p}\beta_{m,h}\,T_{\gamma_{m,h}}(\rho_0,\tilde{\tilde{\gamma}}_{0,n},X_{t-1})\,\nabla_\gamma T(\rho_0,\tilde{\tilde{\gamma}}_{0,n},X_{t-1})\,\nabla_\gamma T(\rho_0,\tilde{\tilde{\gamma}}_{0,n},X_{t-1})^\top\,\phi_f''[\varepsilon_t(\rho_0,\tilde{\tilde{\gamma}}_{0,n})]\\&=\chi_{1,n}(\rho_0,\tilde{\tilde{\gamma}}_{0,n},\beta)-\chi_{2,n}(\rho_0,\tilde{\tilde{\gamma}}_{0,n},\beta)-\chi_{3,n}(\rho_0,\tilde{\tilde{\gamma}}_{0,n},\beta)+\chi_{4,n}(\rho_0,\tilde{\tilde{\gamma}}_{0,n},\beta).\end{aligned}$$
By assumptions $(B'_1)$, (A9), (A4) and (A5), using repeated Taylor expansions and the ergodic theorem, since $\varepsilon_t$ is independent of $\mathcal{F}_{t-1}$, the study of the asymptotic behavior of the $\chi_{i,n}$, $i=1,\ldots,4$, shows that
$$|||\chi_{i,n}(\rho_0,\tilde{\tilde{\gamma}}_{0,n},\beta)|||_{p(k+1)}<\infty.$$
Thus, for $i=1,\ldots,4$, $|||\chi_{i,n}(\rho_0,\tilde{\tilde{\gamma}}_{0,n},\beta)|||_{p(k+1)}$ converges to a finite positive number, from which we find that $|||\nabla^2_\gamma\Pi_n(\rho_0,\tilde{\tilde{\gamma}}_{0,n},\beta)|||_{p(k+1)}/\sqrt{n}$ converges to a finite positive number.
For proving (A17), one can write
$$\left|(\hat{\gamma}_{0,n}-\gamma_0)^\top\nabla^2_\gamma\Pi_n(\rho_0,\tilde{\tilde{\gamma}}_{0,n},\beta)(\hat{\gamma}_{0,n}-\gamma_0)\right|\leq\left\|\sqrt{n}\,(\hat{\gamma}_{0,n}-\gamma_0)\right\|_{p(k+1)}\,\Big|\Big|\Big|\frac{1}{\sqrt{n}}\nabla^2_\gamma\Pi_n(\rho_0,\tilde{\tilde{\gamma}}_{0,n},\beta)\Big|\Big|\Big|_{p(k+1)}\,\left\|\hat{\gamma}_{0,n}-\gamma_0\right\|_{p(k+1)}.$$
Using Proposition 1, we can see that, under $H_0$, $\|\sqrt{n}\,(\hat{\gamma}_{0,n}-\gamma_0)\|_{p(k+1)}$ is bounded in probability. Since $\hat{\gamma}_{0,n}-\gamma_0$ tends to $0\in\mathbb{R}^{p(k+1)}$ in probability, and $|||\nabla^2_\gamma\Pi_n(\rho_0,\tilde{\tilde{\gamma}}_{0,n},\beta)|||_{p(k+1)}/\sqrt{n}$ converges under $H_0$ to some finite positive number, it follows that
$$(\hat{\gamma}_{0,n}-\gamma_0)^\top\nabla^2_\gamma\Pi_n(\rho_0,\tilde{\tilde{\gamma}}_{0,n},\beta)(\hat{\gamma}_{0,n}-\gamma_0)=o_P(1).$$
Now, we prove that
$$(\gamma_0-\hat{\gamma}_{0,n})^\top\nabla_\gamma\Pi_n(\rho_0,\hat{\gamma}_{0,n},\beta)=o_P(1).$$
By adding and subtracting appropriate terms, we obtain
$$\Pi_n(\rho_0,\gamma_0,\beta)=\Pi_n(\rho_0,\hat{\gamma}_{0,n},\beta)+(\gamma_0-\hat{\gamma}_{0,s(n)})^\top\nabla_\gamma\Pi_n(\rho_0,\hat{\gamma}_{0,n},\beta)+(\hat{\gamma}_{0,s(n)}-\hat{\gamma}_{0,n})^\top\nabla_\gamma\Pi_n(\rho_0,\hat{\gamma}_{0,n},\beta)+o_P(1),$$
where $\{s(n)\}_{n\geq1}$ stands for a sequence of positive integers such that $n/s(n)\to0$ as $n\to+\infty$.
Observing that, as $n\to+\infty$,
$$\sqrt{n}\,(\gamma_0-\hat{\gamma}_{0,s(n)})=\sqrt{s(n)}\,(\gamma_0-\hat{\gamma}_{0,s(n)})\times\sqrt{\frac{n}{s(n)}}=o_P(1),$$
it is easy to see that
$$(\gamma_0-\hat{\gamma}_{0,s(n)})^\top\nabla_\gamma\Pi_n(\rho_0,\hat{\gamma}_{0,n},\beta)=\sqrt{n}\,(\gamma_0-\hat{\gamma}_{0,s(n)})^\top\,\frac{1}{\sqrt{n}}\nabla_\gamma\Pi_n(\rho_0,\hat{\gamma}_{0,n},\beta).$$
By assumptions $(B'_1)$, (A9), $(B_5)$ and (A5), using a suitable application of the ergodic theorem, since $\varepsilon_t$ is independent of $\mathcal{F}_{t-1}$, we can show that
$$(\gamma_0-\hat{\gamma}_{0,s(n)})^\top\nabla_\gamma\Pi_n(\rho_0,\hat{\gamma}_{0,n},\beta)=o_P(1).$$
Thus,
$$\Pi_n(\rho_0,\gamma_0,\beta)=\Pi_n(\rho_0,\hat{\gamma}_{0,n},\beta)+(\hat{\gamma}_{0,s(n)}-\hat{\gamma}_{0,n})^\top\nabla_\gamma\Pi_n(\rho_0,\hat{\gamma}_{0,n},\beta)+o_P(1).$$
To treat the above equation, we need the following lemma.
Lemma A3.
Let $\hat{\gamma}_{0,n}$ be a consistent and asymptotically normal estimator of $\gamma_0\in\mathbb{R}^{p(k+1)}$. Let $\{s(n)\}_{n\geq1}$ be a sequence of positive integers such that $n/s(n)$ tends to 0 as $n\to+\infty$. For $\rho_0\in\mathbb{R}^p$ and $\beta\in\mathbb{R}^{p(k+1)}$, $\hat{\gamma}_{0,s(n)}$ asymptotically lies in the tangent space $\widetilde{\mathcal{T}}_n$ to the curve of $\Pi_n(\rho,\gamma,\beta)$ at $\hat{\gamma}_{0,n}$, defined as follows:
$$\widetilde{\mathcal{T}}_n=\Big\{y\in\mathbb{R}^{p(k+1)}:\ \Pi_n(\rho_0,y,\beta)=\Pi_n(\rho_0,\hat{\gamma}_{0,n},\beta)+(y-\hat{\gamma}_{0,n})^\top\nabla_\gamma\Pi_n(\rho_0,\hat{\gamma}_{0,n},\beta)\Big\}.$$
Proof. 
Writing a second-order Taylor expansion of $\Pi_n(\rho_0,\hat{\gamma}_{0,s(n)},\beta)$ in a neighborhood of $\hat{\gamma}_{0,n}$, for some $\tilde{\hat{\gamma}}_{0,s(n)}$ lying between $\hat{\gamma}_{0,s(n)}$ and $\hat{\gamma}_{0,n}$, we obtain
$$\Pi_n(\rho_0,\hat{\gamma}_{0,s(n)},\beta)=\Pi_n(\rho_0,\hat{\gamma}_{0,n},\beta)+(\hat{\gamma}_{0,s(n)}-\hat{\gamma}_{0,n})^\top\nabla_\gamma\Pi_n(\rho_0,\hat{\gamma}_{0,n},\beta)+\frac{1}{2}(\hat{\gamma}_{0,s(n)}-\hat{\gamma}_{0,n})^\top\nabla^2_\gamma\Pi_n(\rho_0,\tilde{\hat{\gamma}}_{0,s(n)},\beta)(\hat{\gamma}_{0,s(n)}-\hat{\gamma}_{0,n}).$$
To prove that, as $n\to+\infty$, $\hat{\gamma}_{0,s(n)}$ belongs to $\widetilde{\mathcal{T}}_n$, it suffices to show that
$$(\hat{\gamma}_{0,s(n)}-\hat{\gamma}_{0,n})^\top\nabla^2_\gamma\Pi_n(\rho_0,\tilde{\hat{\gamma}}_{0,s(n)},\beta)(\hat{\gamma}_{0,s(n)}-\hat{\gamma}_{0,n})=o_P(1).$$
Then, we study the asymptotic distribution of $\sqrt{n}\,(\hat{\gamma}_{0,s(n)}-\hat{\gamma}_{0,n})$. By adding and subtracting appropriate terms, we obtain
$$\sqrt{n}\,(\hat{\gamma}_{0,s(n)}-\hat{\gamma}_{0,n})=\sqrt{s(n)}\,(\hat{\gamma}_{0,s(n)}-\gamma_0)\sqrt{\frac{n}{s(n)}}+\sqrt{n}\,(\gamma_0-\hat{\gamma}_{0,n})=o_P(1)+\sqrt{n}\,(\gamma_0-\hat{\gamma}_{0,n}).$$
It is easy to see that, as $n\to+\infty$, $\sqrt{n}\,(\hat{\gamma}_{0,s(n)}-\hat{\gamma}_{0,n})$ has the same asymptotic distribution as $\sqrt{n}\,(\gamma_0-\hat{\gamma}_{0,n})$, and hence converges in distribution to a normal random vector. To prove that $(\hat{\gamma}_{0,s(n)}-\hat{\gamma}_{0,n})^\top\nabla^2_\gamma\Pi_n(\rho_0,\tilde{\hat{\gamma}}_{0,s(n)},\beta)(\hat{\gamma}_{0,s(n)}-\hat{\gamma}_{0,n})=o_P(1)$, it suffices to show that $\nabla^2_\gamma\Pi_n(\rho_0,\tilde{\hat{\gamma}}_{0,s(n)},\beta)/\sqrt{n}$ is bounded in probability.
Now, we write
$$\frac{1}{\sqrt{n}}\nabla^2_\gamma\Pi_n(\rho_0,\tilde{\hat{\gamma}}_{0,s(n)},\beta)=\chi_{1,n}(\rho_0,\tilde{\hat{\gamma}}_{0,s(n)},\beta)-\chi_{2,n}(\rho_0,\tilde{\hat{\gamma}}_{0,s(n)},\beta)-\chi_{3,n}(\rho_0,\tilde{\hat{\gamma}}_{0,s(n)},\beta)+\chi_{4,n}(\rho_0,\tilde{\hat{\gamma}}_{0,s(n)},\beta),$$
where $\{s(n)\}_{n\geq1}$ stands for a sequence of positive integers satisfying $n/s(n)\to0$ as $n\to+\infty$, $\hat{\gamma}_{0,n}$ is an asymptotically normal estimator of $\gamma_0$, and $\tilde{\hat{\gamma}}_{0,s(n)}$ lies between $\hat{\gamma}_{0,s(n)}$ and $\gamma_0$.
Previously, we proved that $|||\nabla^2_\gamma\Pi_n(\rho_0,\tilde{\tilde{\gamma}}_{0,n},\beta)|||_{p(k+1)}/\sqrt{n}$ converges in probability to a positive random variable, where $\tilde{\tilde{\gamma}}_{0,n}$ lies between $\hat{\gamma}_{0,n}$ and $\gamma_0$. Following the same techniques, we can prove that $|||\nabla^2_\gamma\Pi_n(\rho_0,\tilde{\hat{\gamma}}_{0,s(n)},\beta)|||_{p(k+1)}/\sqrt{n}$ converges in probability to a positive random variable, where $\tilde{\hat{\gamma}}_{0,s(n)}$ lies between $\hat{\gamma}_{0,s(n)}$ and $\gamma_0$.
It results that
$$(\hat{\gamma}_{0,s(n)}-\hat{\gamma}_{0,n})^\top\nabla^2_\gamma\Pi_n(\rho_0,\tilde{\hat{\gamma}}_{0,s(n)},\beta)(\hat{\gamma}_{0,s(n)}-\hat{\gamma}_{0,n})=o_P(1).$$
It follows from Lemma A3 that, as $n\to+\infty$, $\hat{\gamma}_{0,s(n)}$ belongs to the tangent space $\widetilde{\mathcal{T}}_n$. Replacing $y$ by $\hat{\gamma}_{0,s(n)}$, we obtain
$$\Pi_n(\rho_0,\hat{\gamma}_{0,s(n)},\beta)=\Pi_n(\rho_0,\hat{\gamma}_{0,n},\beta)+(\hat{\gamma}_{0,s(n)}-\hat{\gamma}_{0,n})^\top\nabla_\gamma\Pi_n(\rho_0,\hat{\gamma}_{0,n},\beta)+o_P(1).$$
Recalling (A20), we obtain
$$\Pi_n(\rho_0,\gamma_0,\beta)=\Pi_n(\rho_0,\hat{\gamma}_{0,s(n)},\beta)+o_P(1).$$
Now we need the following lemma.
Lemma A4.
Assume that (A1)–(A10), $(B_1)$–$(B_5)$ and $(B'_1)$–$(B'_3)$ hold. Let $\{\hat{\gamma}_{0,n}\}_{n\geq1}$ be a sequence of consistent and asymptotically normal estimators of $\gamma_0$. Let $s(n)$ be any sequence of positive integers such that $n/s(n)\to0$ as $n\to+\infty$. Then, for any $\beta\in\mathbb{R}^{p(k+1)}$, under $H_0$ and as $n\to+\infty$, we have
$$\Pi_n(\rho_n,\gamma_0,\beta)=\Pi_n(\rho_n,\hat{\gamma}_{0,s(n)},\beta)+o_P(1).$$
Proof. 
Following the same techniques as above and writing a Taylor expansion of $\Pi_n(\rho_n,\gamma_0,\beta)$ in a neighborhood of $\hat{\gamma}_{0,n}$, for some $\tilde{\tilde{\gamma}}_{0,n}$ lying between $\gamma_0$ and $\hat{\gamma}_{0,n}$, we obtain
$$\Pi_n(\rho_n,\gamma_0,\beta)=\Pi_n(\rho_n,\hat{\gamma}_{0,n},\beta)-(\hat{\gamma}_{0,n}-\gamma_0)^\top\nabla_\gamma\Pi_n(\rho_n,\hat{\gamma}_{0,n},\beta)+\frac{1}{2}(\hat{\gamma}_{0,n}-\gamma_0)^\top\nabla^2_\gamma\Pi_n(\rho_n,\tilde{\tilde{\gamma}}_{0,n},\beta)(\hat{\gamma}_{0,n}-\gamma_0).$$
We have
$$\left|(\hat{\gamma}_{0,n}-\gamma_0)^\top\nabla^2_\gamma\Pi_n(\rho_n,\tilde{\tilde{\gamma}}_{0,n},\beta)(\hat{\gamma}_{0,n}-\gamma_0)\right|\leq\left\|\sqrt{n}\,(\hat{\gamma}_{0,n}-\gamma_0)\right\|_{p(k+1)}\times\Big|\Big|\Big|\frac{1}{\sqrt{n}}\nabla^2_\gamma\Pi_n(\rho_n,\tilde{\tilde{\gamma}}_{0,n},\beta)\Big|\Big|\Big|_{p(k+1)}\left\|\hat{\gamma}_{0,n}-\gamma_0\right\|_{p(k+1)}.$$
Recall that, under $H_0$, $\sqrt{n}\,(\hat{\gamma}_{0,n}-\gamma_0)$ converges in distribution to a Gaussian random vector and, almost surely, as $n\to+\infty$, $\tilde{\tilde{\gamma}}_{0,n}-\gamma_0$ tends to $0\in\mathbb{R}^{p(k+1)}$. We now study the convergence of $\nabla^2_\gamma\Pi_n(\rho_n,\tilde{\tilde{\gamma}}_{0,n},\beta)/\sqrt{n}$.
Based on the proof of Lemma A2, we have
$$\begin{aligned}\frac{1}{\sqrt{n}}\nabla^2_\gamma\Pi_n(\rho_n,\tilde{\tilde{\gamma}}_{0,n},\beta)&=\frac{1}{\sqrt{n}}\sum_{t=1}^{n}\frac{1}{V(X_{t-1})}\sum_{m=1}^{k+1}\sum_{h=1}^{p}\beta_{m,h}\,\nabla^2_\gamma T_{\gamma_{m,h}}(\rho_n,\tilde{\tilde{\gamma}}_{0,n},X_{t-1})\,\phi_f[\varepsilon_t(\rho_n,\tilde{\tilde{\gamma}}_{0,n})]\\&\quad-\frac{2}{\sqrt{n}}\sum_{t=1}^{n}\frac{1}{V^2(X_{t-1})}\sum_{m=1}^{k+1}\sum_{h=1}^{p}\beta_{m,h}\,\big(\nabla_\gamma T_{\gamma_{m,h}}(\rho_n,\tilde{\tilde{\gamma}}_{0,n},X_{t-1})\big)\big(\nabla_\gamma T(\rho_n,\tilde{\tilde{\gamma}}_{0,n},X_{t-1})\big)^\top\,\phi_f'[\varepsilon_t(\rho_n,\tilde{\tilde{\gamma}}_{0,n})]\\&\quad-\frac{1}{\sqrt{n}}\sum_{t=1}^{n}\frac{1}{V^2(X_{t-1})}\sum_{m=1}^{k+1}\sum_{h=1}^{p}\beta_{m,h}\,T_{\gamma_{m,h}}(\rho_n,\tilde{\tilde{\gamma}}_{0,n},X_{t-1})\,\nabla^2_\gamma T(\rho_n,\tilde{\tilde{\gamma}}_{0,n},X_{t-1})\,\phi_f'[\varepsilon_t(\rho_n,\tilde{\tilde{\gamma}}_{0,n})]\\&\quad+\frac{1}{\sqrt{n}}\sum_{t=1}^{n}\frac{1}{V^3(X_{t-1})}\sum_{m=1}^{k+1}\sum_{h=1}^{p}\beta_{m,h}\,T_{\gamma_{m,h}}(\rho_n,\tilde{\tilde{\gamma}}_{0,n},X_{t-1})\,\nabla_\gamma T(\rho_n,\tilde{\tilde{\gamma}}_{0,n},X_{t-1})\,\nabla_\gamma T(\rho_n,\tilde{\tilde{\gamma}}_{0,n},X_{t-1})^\top\,\phi_f''[\varepsilon_t(\rho_n,\tilde{\tilde{\gamma}}_{0,n})].\end{aligned}$$
Based on assumptions $(B_1)$–$(B_5)$ and $(B'_1)$–$(B'_3)$ and using the same techniques, we can prove that $|||\nabla^2_\gamma\Pi_n(\rho_n,\tilde{\tilde{\gamma}}_{0,n},\beta)|||_{p(k+1)}/\sqrt{n}$ converges almost surely to a finite positive random number.
From these results, we can conclude that
$$(\hat{\gamma}_{0,n}-\gamma_0)^\top\nabla^2_\gamma\Pi_n(\rho_n,\tilde{\tilde{\gamma}}_{0,n},\beta)(\hat{\gamma}_{0,n}-\gamma_0)=o_P(1).$$
Then, we can write
$$\Pi_n(\rho_n,\gamma_0,\beta)=\Pi_n(\rho_n,\hat{\gamma}_{0,n},\beta)+(\gamma_0-\hat{\gamma}_{0,n})^\top\nabla_\gamma\Pi_n(\rho_n,\hat{\gamma}_{0,n},\beta)+o_P(1).$$
Adding and subtracting appropriate terms, we obtain
$$\Pi_n(\rho_n,\gamma_0,\beta)=\Pi_n(\rho_n,\hat{\gamma}_{0,n},\beta)+(\gamma_0-\hat{\gamma}_{0,s(n)})^\top\nabla_\gamma\Pi_n(\rho_n,\hat{\gamma}_{0,n},\beta)+(\hat{\gamma}_{0,s(n)}-\hat{\gamma}_{0,n})^\top\nabla_\gamma\Pi_n(\rho_n,\hat{\gamma}_{0,n},\beta)+o_P(1).$$
By assumptions $(B_1)$–$(B_5)$ and the ergodic theorem, we can prove that
$$\Pi_n(\rho_n,\gamma_0,\beta)=\Pi_n(\rho_n,\hat{\gamma}_{0,s(n)},\beta)+o_P(1),$$
where $\{s(n)\}_{n\geq1}$ is a sequence of positive integers such that $n/s(n)\to0$ as $n\to+\infty$. Returning to the proof of Proposition 3, and using Lemmas A2 and A4 with $\rho_{s(n)}$ in place of $\rho_n$, we can write
$$\Pi_n(\rho_{s(n)},\gamma_0,\beta)=\Pi_n(\rho_{s(n)},\hat{\gamma}_{0,s(n)},\beta)+o_P(1).$$
From Proposition 2, we have
$$\Pi_n(\rho_0,\gamma_0,\beta)=\Pi_n(\rho_{s(n)},\gamma_0,\beta)+o_P(1),$$
and
$$\Pi_n(\rho_0,\gamma_0,\beta)-\Pi_n(\rho_{s(n)},\hat{\gamma}_{0,s(n)},\beta)=\Pi_n(\rho_0,\gamma_0,\beta)-\Pi_n(\rho_{s(n)},\gamma_0,\beta)+o_P(1).$$
Finally, using the above results, we can write
$$\Pi_n(\rho_0,\gamma_0,\beta)=\Pi_n(\rho_{s(n)},\hat{\gamma}_{0,s(n)},\beta)+o_P(1).$$

Appendix A.8. Proof of Theorem 4

Appendix A.8.1. Proof of [i]

The test statistic is
$$T_n(\rho_{s(n)},\hat{\gamma}_{0,s(n)},\beta)=\frac{\Pi_n(\rho_{s(n)},\hat{\gamma}_{0,s(n)},\beta)}{\hat{\vartheta}_n(\rho_{s(n)},\hat{\gamma}_{0,s(n)},\beta)}.$$
We proved in Proposition 3 that
$$\Pi_n(\rho_0,\gamma_0,\beta)=\Pi_n(\rho_{s(n)},\hat{\gamma}_{0,s(n)},\beta)+o_P(1).$$
We also proved that, as $n\to+\infty$,
$$\hat{\vartheta}_n(\rho_{s(n)},\hat{\gamma}_{0,s(n)},\beta)\xrightarrow{P}\vartheta(\rho_0,\gamma_0,\beta).$$
Thus, under $H_0$ and as $n\to+\infty$, we have
$$\frac{\hat{\vartheta}_n(\rho_{s(n)},\hat{\gamma}_{0,s(n)},\beta)}{\vartheta(\rho_0,\gamma_0,\beta)}\xrightarrow{P}1,$$
and
$$T_n(\rho_{s(n)},\hat{\gamma}_{0,s(n)},\beta)=\frac{\Pi_n(\rho_0,\gamma_0,\beta)}{\hat{\vartheta}_n(\rho_{s(n)},\hat{\gamma}_{0,s(n)},\beta)}+\frac{1}{\hat{\vartheta}_n(\rho_{s(n)},\hat{\gamma}_{0,s(n)},\beta)}\times o_P(1)=\frac{\Pi_n(\rho_0,\gamma_0,\beta)}{\vartheta(\rho_0,\gamma_0,\beta)}\times\frac{\vartheta(\rho_0,\gamma_0,\beta)}{\hat{\vartheta}_n(\rho_{s(n)},\hat{\gamma}_{0,s(n)},\beta)}+\frac{1}{\hat{\vartheta}_n(\rho_{s(n)},\hat{\gamma}_{0,s(n)},\beta)}\times o_P(1)=T_n(\rho_0,\gamma_0,\beta)+o_P(1).$$

Appendix A.8.2. Proof of [ii] and [iii]

The convergence of $\hat{\vartheta}_n(\rho_{s(n)},\hat{\gamma}_{0,s(n)},\beta)$ to $\vartheta(\rho_0,\gamma_0,\beta)$ remains true under the local alternatives $H_\beta^{(n)}$ by contiguity. Then, under $H_\beta^{(n)}$ and as $n\to+\infty$, using Le Cam's third lemma, we have
$$T_n(\rho_{s(n)},\hat{\gamma}_{0,s(n)},\beta)=\frac{\Pi_n(\rho_{s(n)},\hat{\gamma}_{0,s(n)},\beta)}{\hat{\vartheta}_n(\rho_{s(n)},\hat{\gamma}_{0,s(n)},\beta)}\xrightarrow{\mathcal{D}}\mathcal{N}(\vartheta(\rho_0,\gamma_0,\beta),1).$$
Hence, the local power of the test remains the same.
For more proof details, see [36].
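As a purely schematic complement to the simulation tables below, the following toy sketch shows how a break location can be recovered by scanning candidate time indices and maximizing a standardized contrast. It uses a simple CUSUM-type two-sample statistic on a simulated AR(1) series with a single mean shift; it is our illustration only, not the authors' procedure, which maximizes the estimated local power of the test studied above.

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulated AR(1) series with one mean shift at tau1 (toy setting)
n, tau1, rho, shift = 250, 120, 0.3, 1.0
x = np.zeros(n)
for t in range(1, n):
    x[t] = rho * x[t - 1] + shift * (t >= tau1) + rng.standard_normal()

def contrast(tau):
    """Standardized two-sample mean contrast at candidate break tau."""
    a, b = x[:tau], x[tau:]
    return abs(a.mean() - b.mean()) * np.sqrt(tau * (n - tau) / n)

candidates = range(20, n - 20)            # keep a margin at both ends
tau_hat = max(candidates, key=contrast)
print(tau_hat)                            # typically close to 120
```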

References

  1. Härdle, W.; Tsybakov, A.; Yang, L. Nonparametric vector autoregression. J. Stat. Plan. Inference 1998, 68, 221–245.
  2. Chen, G.; Gan, M.; Chen, G. Generalized exponential autoregressive models for nonlinear time series: Stationarity, estimation and applications. Inf. Sci. 2018, 438, 46–57.
  3. Yau, Y.C.; Zhao, Z. The asymptotic behavior of the likelihood ratio statistic for testing a shift in mean in a sequence of independent normal variates. J. R. Statist. Soc. 2016, 48, 895–916.
  4. Ngatchou-Wandji, J.; Ltaifa, M. Detecting weak changes in the mean of a class of nonlinear heteroscedastic models. Commun. Stat. Simul. Comput. 2023, 1–33.
  5. Csörgő, M.; Horváth, L.; Szyszkowicz, B. Integral tests for suprema of Kiefer processes with application. Stat. Risk Model. 1997, 15, 365–378.
  6. MacNeill, I.B. Tests for change of parameter at unknown times and distributions of some related functionals on Brownian motion. Ann. Stat. 1974, 2, 950–962.
  7. Chernoff, H.; Zacks, S. Estimating the current mean of a normal distribution which is subjected to changes in time. Ann. Math. Stat. 1964, 35, 999–1018.
  8. Davis, R.A.; Huang, D.; Yao, Y.-C. Testing for a change in the parameter values and order of an autoregressive model. Ann. Stat. 1995, 23, 282–304.
  9. Vogelsang, T.J. Wald-type tests for detecting breaks in the trend function of a dynamic time series. Econom. Theory 1997, 13, 818–848.
  10. Andrews, D.W.; Ploberger, W. Optimal tests when a nuisance parameter is present only under the alternative. Econom. J. Econom. Soc. 1994, 62, 1383–1414.
  11. Andrews, D.W. Tests for parameter instability and structural change with unknown change point. Econom. J. Econom. Soc. 1993, 61, 821–856.
  12. Lavielle, M.; Lebarbier, E. An application of MCMC methods for the multiple change-points problem. Signal Process. 2001, 81, 39–53.
  13. Lebarbier, É. Detecting multiple change-points in the mean of Gaussian process by model selection. Signal Process. 2005, 85, 717–736.
  14. Davis, R.A.; Lee, T.C.; Rodriguez-Yam, G.A. Break detection for a class of nonlinear time series models. J. Time Ser. Anal. 2008, 29, 834–867.
  15. Fotopoulos, S.B.; Jandhyala, V.K.; Tan, L. Asymptotic study of the change-point MLE in multivariate Gaussian families under contiguous alternatives. J. Stat. Plan. Inference 2009, 139, 1190–1202.
  16. Huh, J. Detection of a change point based on local-likelihood. J. Multivar. Anal. 2010, 101, 1681–1700.
  17. Jarušková, D. Asymptotic behaviour of a test statistic for detection of change in mean of vectors. J. Stat. Plan. Inference 2010, 140, 616–625.
  18. Piterbarg, V.I. High deviations for multidimensional stationary Gaussian processes with independent components. In Stability Problems for Stochastic Models; De Gruyter: Berlin, Germany; Boston, MA, USA, 1994; pp. 197–210.
  19. Chen, K.; Cohen, A.; Sackrowitz, H. Consistent multiple testing for change points. J. Multivar. Anal. 2011, 102, 1339–1343.
  20. Vostrikova, L.Y. Detecting "disorder" in multidimensional random processes. In Doklady Akademii Nauk; Russian Academy of Sciences: Moscow, Russia, 1981; Volume 259, pp. 270–274.
  21. Cohen, A.; Sackrowitz, H.B.; Xu, M. A new multiple testing method in the dependent case. Ann. Stat. 2009, 37, 1518–1544.
  22. Prášková, Z.; Chochola, O. M-procedures for detection of a change under weak dependence. J. Stat. Plan. Inference 2014, 149, 60–76.
  23. Dupuis, D.; Sun, Y.; Wang, H.J. Detecting Change-Points in Extremes; International Press of Boston: Boston, MA, USA, 2015.
  24. Badagián, A.L.; Kaiser, R.; Peña, D. Time series segmentation procedures to detect, locate and estimate change-points. In Empirical Economic and Financial Research; Springer: Berlin/Heidelberg, Germany, 2015; pp. 45–59.
  25. Yau, C.Y.; Zhao, Z. Inference for multiple change points in time series via likelihood ratio scan statistics. J. R. Stat. Soc. Ser. B (Stat. Methodol.) 2016, 78, 895–916.
  26. Ruggieri, E.; Antonellis, M. An exact approach to Bayesian sequential change-point detection. Comput. Stat. Data Anal. 2016, 97, 71–86.
  27. Horváth, L.; Miller, C.; Rice, G. A new class of change point test statistics of Rényi type. J. Bus. Econ. Stat. 2020, 38, 570–579.
  28. Ltaifa, M. Tests Optimaux pour Détecter les Signaux Faibles dans les Séries Chronologiques. Ph.D. Thesis, Université de Lorraine, Nancy, France; Université de Sousse, Sousse, Tunisia, 2021.
  29. Bahadur, R.R.; Rao, R.R. On deviations of the sample mean. Ann. Math. Stat. 1960, 31, 1015–1027.
  30. Parzen, E. On estimation of a probability density function and mode. Ann. Math. Stat. 1962, 33, 1065–1076.
  31. Hall, P.; Heyde, C.C. Martingale Limit Theory and Its Application; Academic Press: Cambridge, MA, USA, 2014.
  32. Le Cam, L. The central limit theorem around 1935. Stat. Sci. 1986, 1, 78–91.
  33. Droesbeke, J.-J.; Fine, J. Inférence Non Paramétrique: Les Statistiques de Rangs; Éditions de l'Université de Bruxelles: Brussels, Belgium; Éditions Ellipses: Paris, France, 1996.
  34. Ngatchou-Wandji, J. Estimation in a class of nonlinear heteroscedastic time series models. Electron. J. Stat. 2008, 2, 40–62.
  35. Ngatchou-Wandji, J. Checking nonlinear heteroscedastic time series models. J. Stat. Plan. Inference 2005, 133, 33–68.
  36. Salman, Y. Testing a Class of Time-Varying Coefficients CHARN Models with Application to Change-Point Study. Ph.D. Thesis, Université de Lorraine, Nancy, France; Lebanese University, Beirut, Lebanon, 2022.
Figure 1. Power of the test with respect to β in a class of AR(1) models when f is a standard Gaussian density.
Figure 2. Power of the test with respect to β in a class of AR(1) models when f is estimated by a kernel method.
Figure 3. No break in the data.
Figure 4. Estimated change points in the S&P 500 indices.
Figure 5. Estimated change points in the residual series of the S&P 500 indices.
Table 1. Break location estimates in an AR(1) model, with the corresponding RMSE in parentheses, for different magnitudes of change $(\beta_{1,1},\beta_{1,2})$.

| $\tau_1$ | (2, 1) | (1, 2) | (2, 1) | (2, 3) | (3, 2) |
|---|---|---|---|---|---|
| 80 | 78 (4.32) | 81 (1.41) | 82 (4.01) | 80 (0.654) | 80 (0.34) |
| 100 | 99 (6.96) | 99 (4.67) | 102 (3.34) | 101 (1.23) | 100 (0.54) |
| 120 | 122 (5.12) | 121 (2.32) | 119 (2.45) | 120 (0.23) | 121 (1.23) |
| 140 | 143 (10.24) | 142 (2.22) | 141 (2.35) | 140 (0.22) | 141 (1.12) |
| 160 | 164 (11.65) | 164 (4.43) | 162 (2.23) | 161 (1.32) | 161 (1.12) |
| 185 | 188 (8.21) | 189 (5.32) | 190 (6.33) | 190 (7.43) | 189 (8.23) |
Table 2. Break location estimation in a class of AR(1) models for a fixed $\zeta=0.1$; RMSE in parentheses.

| $(\beta_1,\beta_2,\beta_3)$ | Estimates of $\tau=(\tau_1,\tau_2,\tau_3)=(90,190,275)$ |
|---|---|
| (3, 2), (1, 3), (1, 1) | (93, 193, 277) (5.78) |
| (1, 0.5), (2, 1), (1, 1) | (93, 194, 278) (9.87) |
| (2, 1.5), (1, 3), (0.5, 1) | (92, 193, 277) (8.99) |
Table 3. Break location estimation in AR-ARCH models for a fixed $\zeta=0.1$; RMSE in parentheses.

| $(\beta_1,\beta_2,\beta_3)$ | Estimates of $\tau=(\tau_1,\tau_2,\tau_3)=(90,190,275)$ |
|---|---|
| (3, 2), (1, 3), (1, 1) | (93, 193, 277) (9.43) |
| (1, 0.5), (2, 1), (1, 1) | (93, 194, 278) (10.64) |
| (2, 1.5), (1, 3), (0.5, 1) | (92, 193, 277) (6.10) |
Table 4. Break location estimates obtained by our method and by SCUSUM for different break instants $\tau_1$ and different magnitudes of change $(\beta_{1,1},\beta_{1,2})$.

| | (2, 1) | (1, 2) | (2, 1) | (2, 3) | (3, 2) |
|---|---|---|---|---|---|
| $\tau_1=80$: Our method | 83 | 81 | 82 | 80 | 80 |
| $\tau_1=80$: SCUSUM | 95 | 99 | 86 | 122 | 99 |
| $\tau_1=120$: Our method | 122 | 121 | 121 | 120 | 121 |
| $\tau_1=120$: SCUSUM | 110 | 114 | 105 | 125 | 122 |
| $\tau_1=185$: Our method | 189 | 188 | 186 | 186 | 185 |
| $\tau_1=185$: SCUSUM | 105 | 103 | 122 | 114 | 101 |