Article

Testing Coefficient Randomness in Multivariate Random Coefficient Autoregressive Models Based on Locally Most Powerful Test

School of Mathematics and Statistics, Changchun University of Science and Technology, Changchun 130022, China
* Author to whom correspondence should be addressed.
Mathematics 2024, 12(16), 2455; https://doi.org/10.3390/math12162455
Submission received: 2 July 2024 / Revised: 2 August 2024 / Accepted: 3 August 2024 / Published: 7 August 2024
(This article belongs to the Section Probability and Statistics)

Abstract

The multivariate random coefficient autoregression (RCAR) process is widely used in time series modeling applications. Random autoregressive coefficients are usually assumed to be independent and identically distributed sequences of random variables. This paper investigates the issue of coefficient constancy testing in a class of static multivariate first-order random coefficient autoregressive models. We construct a new test statistic based on the locally most powerful-type test and derive its limiting distribution under the null hypothesis. The simulation compares the empirical sizes and powers of the LMP test and the empirical likelihood test, demonstrating that the LMP test outperforms the EL test in accuracy by 10.2%, 10.1%, and 30.9% under conditions of normal, Beta-distributed, and contaminated errors, respectively. We provide two sets of real data to illustrate the practical effectiveness of the LMP test.

1. Introduction

The autoregressive model proposed by Yule [1] is one of the most classical models in time series analysis. It has been extensively studied for its theoretical properties and applications. See, for example, Box and Jenkins [2] and Maleki and Nematollahi [3]. In recent years, statisticians have shown increased interest in multivariate time series. As a result, multivariate autoregressive models have been widely studied in the fields of traffic prediction (Dmitry [4]) and neuroscience (Gilson et al. [5]). The multivariate first-order autoregressive (AR(1)) model is defined as
$$X_t = A X_{t-1} + Z_t, \quad t \in \mathbb{N}, \tag{1}$$
where $A$ is an $n \times n$ matrix with constant elements and $Z_t$ is an independent and identically distributed (i.i.d.) random error vector with $E(Z_t) = 0$. However, the coefficients of a time series may not be constant in practical problems. To enhance the utility of autoregressive models, Anděl [6] extended the multivariate AR model to the random coefficient case and proposed a multivariate random coefficient autoregressive model.
The multivariate first-order random coefficient autoregressive (RCAR(1)) model is defined as
$$X_t = A_t X_{t-1} + Z_t, \quad t \in \mathbb{N}, \tag{2}$$
where $A_t$ is an i.i.d. sequence of random coefficient matrices and $Z_t$ is an i.i.d. sequence of random errors; for fixed $t$ and all $s < t$, $Z_t$, $A_t$, and $X_s$ are independent of each other.
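To make the model concrete, the following sketch simulates a bivariate RCAR(1) path with diagonal random coefficients. The function name and all parameter values are illustrative assumptions, not taken from the paper.

```python
import numpy as np

# Hypothetical sketch: simulate a bivariate RCAR(1) path
# X_t = A_t X_{t-1} + Z_t with a diagonal random coefficient matrix A_t.
rng = np.random.default_rng(0)

def simulate_rcar1(n_obs, mu_alpha, sigma_alpha, cov_z, x0=None):
    """Generate n_obs observations from a diagonal-coefficient RCAR(1) model."""
    dim = len(mu_alpha)
    x = np.zeros(dim) if x0 is None else np.asarray(x0, float)
    path = np.empty((n_obs, dim))
    for t in range(n_obs):
        # Draw the random diagonal coefficients alpha_{i,t} independently.
        alpha = rng.normal(mu_alpha, sigma_alpha)
        # Draw the correlated error vector Z_t.
        z = rng.multivariate_normal(np.zeros(dim), cov_z)
        x = alpha * x + z
        path[t] = x
    return path

path = simulate_rcar1(500, mu_alpha=[0.2, 0.1], sigma_alpha=[0.3, 0.3],
                      cov_z=[[1.0, 0.6], [0.6, 9.0]])
print(path.shape)  # (500, 2)
```

Note that the chosen values satisfy the stationarity-type condition $\mu_{\alpha_i}^2 + \sigma_{\alpha_i}^2 < 1$ used later in condition $(C_1)$.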
The inference for the univariate RCAR(1) model has been extensively developed. RCAR models are essential in various fields, including forecasting, finance, biology, meteorology, and other practical applications. Stationarity, ergodicity, and finiteness of moments for RCAR processes were discussed by Feigin and Tweedie [7]. Aue et al. [8] demonstrated that, under optimal assumptions, the quasi-maximum likelihood estimator in the RCAR(1) model exhibits asymptotic normality and strong consistency. Aue and Horváth [9] introduced a unified quasi-likelihood approach to estimating the unknown parameters of the RCAR(1) model; this approach works for both stationary and nonstationary processes. Inspired by Dette and Wied [10], Horváth and Trapani [11] and Bi et al. [12] proposed tests to determine whether the autoregressive coefficient is truly random. Horváth and Trapani [11] introduced a useful test to assess randomness within the RCAR(1) model using the two-stage WLS method, which can be applied even when $X_t$ is non-stationary. Wang et al. [13] derived the asymptotic properties of the maximum likelihood estimator for the parameters of the RCAR(1) model under the assumption of a unit root.
The increase in the number of unknown parameters has significantly complicated the analysis of multivariate RCAR(1) models, and there is currently limited research on this subject. Chen et al. [14] tested the randomness of coefficients in the multivariate RCAR(1) model based on the empirical likelihood method. Furthermore, the multivariate RCAR(1) model was studied by Prášková and Vaněček [15], Vaněček [16], and Nicholls and Quinn [17]. In this paper, we employ the locally most powerful-type (LMP) method to test the randomness of coefficients in the multivariate RCAR(1) model. The LMP test has been studied by various researchers: Manik et al. [18], Chikkagoudar and Biradar [19], and Rohatgi et al. [20] used LMP tests to examine the stability of parameters. Advantages of this method include its stable sizes when the parameters are close to their true values and its relatively simple calculation. Moreover, our testing method exhibits higher test performance than some traditional testing methods. In this article, we demonstrate the implementation of the proposed method for finite samples, under both contaminated and normal errors. Numerical simulations indicate that our test outperforms the empirical likelihood (EL) test proposed by Chen et al. [14] in terms of maintaining both empirical sizes and powers.
The remainder of the paper is organized as follows. In Section 2, under the null hypothesis, we provide the LMP test statistic and its asymptotic distribution. Additionally, in Section 3, we conduct numerous numerical simulation studies and present the results. In Section 4, two practical examples are used to illustrate the superiority and justification of our approach. The proofs are provided in Appendix A.

2. Methodology and Main Results

In this paper, we study the multivariate first-order random coefficient autoregressive model. The definition is as follows:
$$X_t = A_t X_{t-1} + Z_t = \begin{pmatrix} \alpha_{1,t} & & & \\ & \alpha_{2,t} & & \\ & & \ddots & \\ & & & \alpha_{n,t} \end{pmatrix}_{n \times n} \begin{pmatrix} X_{1,t-1} \\ X_{2,t-1} \\ \vdots \\ X_{n,t-1} \end{pmatrix}_{n \times 1} + \begin{pmatrix} Z_{1,t} \\ Z_{2,t} \\ \vdots \\ Z_{n,t} \end{pmatrix}_{n \times 1}, \tag{3}$$
where $\alpha_{i,t}$, $i = 1, 2, \ldots, n$, are i.i.d. random sequences with distribution function $F_{\alpha_i}$, $E(\alpha_{i,t}) = \mu_{\alpha_i}$, and $Var(\alpha_{i,t}) = \sigma_{\alpha_i}^2$; $\{Z_{i,t}, i = 1, 2, \ldots, n\}$ are i.i.d. sequences with joint probability density function $f_z(z_{1,t}, z_{2,t}, \ldots, z_{n,t}) > 0$, where $E(Z_{i,t}) = 0$, $Var(Z_{i,t}) = \sigma_{z_i}^2$, and $Cov(Z_{i,t}, Z_{j,t}) = \rho_{i,j} \sigma_{z_i} \sigma_{z_j}$ for $i, j = 1, 2, \ldots, n$, $i \neq j$, with $\rho_{z_{i,t}, z_{j,t}} \equiv \rho_{i,j}$.
In this section, in order to test the randomness of the coefficients, we establish a test statistic to determine whether the matrix $A_t$ is a constant matrix, which is equivalent to testing whether $\tau = \sum_{i=1}^{n} \sigma_{\alpha_i}^2$ equals $0$. To this end, we consider the following hypothesis test:
$$H_0: \tau = 0 \quad \text{vs.} \quad H_1: \tau > 0.$$
Suppose that the time series data $\{X_t\}_{t \in \mathbb{Z}}$ are generated from (3). Denote $\theta_1 = E(A_t) = (\mu_{\alpha_1}, \ldots, \mu_{\alpha_n})^T$, $\theta_2 = (\sigma_{z_1}^2, \ldots, \sigma_{z_n}^2)^T$, and $\theta_3 = (\rho_{1,2}, \ldots, \rho_{1,n}, \rho_{2,3}, \ldots, \rho_{2,n}, \ldots, \rho_{n-1,n})^T$. The unknown parameter is $\theta = (\theta_1^T, \theta_2^T, \theta_3^T)^T$, and $\theta_0$ denotes the true value of the unknown parameter. To construct the LMP test statistic, we make the following assumptions:
Suppose there exist a neighborhood $U(\theta_0)$ of $\theta_0$ and a positive real-valued function $N$ such that:
$(C_1)$ $0 < \mu_{\alpha_i}^2 + \sigma_{\alpha_i}^2 < 1$, $i = 1, 2, \ldots, n$;
$(C_2)$ $E|X_{i,t}|^4 < \infty$, $i = 1, 2, \ldots, n$;
$(C_3)$ $E|\alpha_{i,t}|^3 < \infty$, $i = 1, 2, \ldots, n$;
$(C_4)$ the function $f_z(z_{1,t}, z_{2,t}, \ldots, z_{n,t})$ has continuous partial derivatives up to third order with respect to the parameter $\theta$, and the derivatives of both $f_z$ and $\log f_z$ are bounded by the common upper bound $N$ on $U(\theta_0)$.
Since $\{X_t\}_{t \in \mathbb{Z}}$ is a Markov chain, the transition probabilities can be expressed as follows:
$$f_{X_t|X_{t-1}}(x_t|x_{t-1}) = \int \cdots \int f_z(x_{1,t} - \alpha_{1,t} x_{1,t-1}, x_{2,t} - \alpha_{2,t} x_{2,t-1}, \ldots, x_{n,t} - \alpha_{n,t} x_{n,t-1}) \, dF_{\alpha_1} \cdots dF_{\alpha_n}.$$
Therefore, we can derive the likelihood function for the multivariate RCAR(1) model:
$$L(\theta) = f_{X_0}(x_0) \prod_{t=1}^{n} f_{X_t|X_{t-1}}(x_t|x_{t-1}) = f_{X_0}(x_0) \prod_{t=1}^{n} \int \cdots \int f_z(x_{1,t} - \alpha_{1,t} x_{1,t-1}, \ldots, x_{n,t} - \alpha_{n,t} x_{n,t-1}) \, dF_{\alpha_1} \cdots dF_{\alpha_n} \triangleq f_{X_0}(x_0) \prod_{t=1}^{n} \int \cdots \int f_z(z_{1,t}, z_{2,t}, \ldots, z_{n,t}) \, dF_{\alpha_1} \cdots dF_{\alpha_n},$$
where $z_{i,t} = x_{i,t} - \alpha_{i,t} x_{i,t-1}$.
Now, the Taylor series expansion of the integrand of $L(\theta)$ around $(\mu_{\alpha_1}, \mu_{\alpha_2}, \ldots, \mu_{\alpha_n})^T$, the mean of $(\alpha_{1,t}, \alpha_{2,t}, \ldots, \alpha_{n,t})$, is given as follows:
$$L(\theta) = f_{X_0}(x_0) \prod_{t=1}^{n} \int \cdots \int \Big\{ f_z + \Big( h_1 \frac{\partial}{\partial \mu_{\alpha_1}} + h_2 \frac{\partial}{\partial \mu_{\alpha_2}} + \cdots + h_n \frac{\partial}{\partial \mu_{\alpha_n}} \Big) f_z + \frac{1}{2} \Big( h_1 \frac{\partial}{\partial \mu_{\alpha_1}} + \cdots + h_n \frac{\partial}{\partial \mu_{\alpha_n}} \Big)^2 f_z + \frac{1}{3!} \Big( h_1 \frac{\partial}{\partial \mu_{\alpha_1}} + \cdots + h_n \frac{\partial}{\partial \mu_{\alpha_n}} \Big)^3 f_z + \cdots \Big\}(z_{1,t}, z_{2,t}, \ldots, z_{n,t}) \, dF_{\alpha_1} dF_{\alpha_2} \cdots dF_{\alpha_n}.$$
Thus, the log-likelihood function is
$$\log L(\theta) = \sum_{t=1}^{n} \log \int \cdots \int \Big\{ f_z + \Big( h_1 \frac{\partial}{\partial \mu_{\alpha_1}} + h_2 \frac{\partial}{\partial \mu_{\alpha_2}} + \cdots + h_n \frac{\partial}{\partial \mu_{\alpha_n}} \Big) f_z + \frac{1}{2} \Big( h_1 \frac{\partial}{\partial \mu_{\alpha_1}} + \cdots + h_n \frac{\partial}{\partial \mu_{\alpha_n}} \Big)^2 f_z + \frac{1}{3!} \Big( h_1 \frac{\partial}{\partial \mu_{\alpha_1}} + \cdots + h_n \frac{\partial}{\partial \mu_{\alpha_n}} \Big)^3 f_z + \cdots \Big\}(z_{1,t}, z_{2,t}, \ldots, z_{n,t}) \, dF_{\alpha_1} dF_{\alpha_2} \cdots dF_{\alpha_n},$$
where $h_i = \alpha_{i,t} - \mu_{\alpha_i}$, $i = 1, 2, \ldots, n$. If $\sigma_{\alpha_i}^2 = 0$, then $P(\alpha_{i,t} - \mu_{\alpha_i} = 0) = 1$, so the distribution of the random variable $\alpha_{i,t}$ is degenerate. Additionally, we obtain
$$\int \frac{(\alpha_{i,t} - \mu_{\alpha_i})^k}{k!} \, dF_{\alpha_i} = 0, \quad i = 1, 2, \ldots, n, \; k = 1, 2, 3, \ldots$$
Moreover, since $\alpha_{i,t}$, $i = 1, 2, \ldots, n$, are i.i.d. random sequences and $W_{i,j}$ denotes the coefficient of the cross term, which is independent of $\alpha_{i,t}$ and $\alpha_{j,t}$, it follows that
$$\int \cdots \int (\alpha_{i,t} - \mu_{\alpha_i})^k (\alpha_{j,t} - \mu_{\alpha_j})^l \, W_{i,j} \, dF_{\alpha_1} dF_{\alpha_2} \cdots dF_{\alpha_n} = 0, \quad i, j = 1, 2, \ldots, n, \; i \neq j, \; k, l = 1, 2, 3, \ldots$$
The derivative of $\log L(\theta)$ with respect to $\sigma_{\alpha_i}^2$ at $\tau = \sum_{i=1}^{n} \sigma_{\alpha_i}^2 = 0$ gives the following equation:
$$\begin{aligned}
\frac{\partial \log L(\theta)}{\partial \sigma_{\alpha_i}^2}\Big|_{\tau=0} &= \sum_{t=1}^{n} \frac{\partial}{\partial \sigma_{\alpha_i}^2} \log\Big[ f_z + \frac{1}{2}\sigma_{\alpha_1}^2 \frac{\partial^2 f_z}{\partial \mu_{\alpha_1}^2} + \cdots + \frac{1}{2}\sigma_{\alpha_n}^2 \frac{\partial^2 f_z}{\partial \mu_{\alpha_n}^2} + \frac{\partial^3 f_z}{\partial \mu_{\alpha_1}^3} \int \frac{(\alpha_{1,t} - \mu_{\alpha_1})^3}{3!} \, dF_{\alpha_1} + \cdots \Big]\Big|_{\tau=0} \\
&= \sum_{t=1}^{n} \frac{\partial}{\partial \sigma_{\alpha_i}^2} \log\Big[ f_z + \frac{1}{2}\sigma_{\alpha_1}^2 \frac{\partial^2 f_z}{\partial \mu_{\alpha_1}^2} + \cdots + \frac{1}{2}\sigma_{\alpha_n}^2 \frac{\partial^2 f_z}{\partial \mu_{\alpha_n}^2} \Big]\Big|_{\tau=0} \\
&= \sum_{t=1}^{n} \frac{\tfrac{1}{2}\, \partial^2 f_z/\partial \mu_{\alpha_i}^2}{f_z + \tfrac{1}{2}\sigma_{\alpha_1}^2\, \partial^2 f_z/\partial \mu_{\alpha_1}^2 + \cdots + \tfrac{1}{2}\sigma_{\alpha_n}^2\, \partial^2 f_z/\partial \mu_{\alpha_n}^2}\Big|_{\tau=0} \\
&= \frac{1}{2} \sum_{t=1}^{n} \frac{(\partial^2 f_z/\partial \mu_{\alpha_i}^2)(z_{1,t}, z_{2,t}, \ldots, z_{n,t})}{f_z(z_{1,t}, z_{2,t}, \ldots, z_{n,t})}, \quad i = 1, 2, \ldots, n.
\end{aligned}$$
Collecting the derivatives of $\log L(\theta)$ with respect to each $\sigma_{\alpha_i}^2$ at $\tau = 0$ into a vector gives the following equation:
$$\begin{pmatrix} \partial \log L(\theta)/\partial \sigma_{\alpha_1}^2 \big|_{\tau=0} \\ \partial \log L(\theta)/\partial \sigma_{\alpha_2}^2 \big|_{\tau=0} \\ \vdots \\ \partial \log L(\theta)/\partial \sigma_{\alpha_n}^2 \big|_{\tau=0} \end{pmatrix}_{n \times 1} = \begin{pmatrix} \frac{1}{2} \sum_{t=1}^{n} \dfrac{(\partial^2 f_z/\partial \mu_{\alpha_1}^2)(z_{1,t}, z_{2,t}, \ldots, z_{n,t})}{f_z(z_{1,t}, z_{2,t}, \ldots, z_{n,t})} \\ \frac{1}{2} \sum_{t=1}^{n} \dfrac{(\partial^2 f_z/\partial \mu_{\alpha_2}^2)(z_{1,t}, z_{2,t}, \ldots, z_{n,t})}{f_z(z_{1,t}, z_{2,t}, \ldots, z_{n,t})} \\ \vdots \\ \frac{1}{2} \sum_{t=1}^{n} \dfrac{(\partial^2 f_z/\partial \mu_{\alpha_n}^2)(z_{1,t}, z_{2,t}, \ldots, z_{n,t})}{f_z(z_{1,t}, z_{2,t}, \ldots, z_{n,t})} \end{pmatrix}_{n \times 1}. \tag{4}$$
Substituting the maximum likelihood estimate $\hat{\theta}$ of the parameter $\theta$ into (4) yields the LMP test statistic:
$$T(\hat{\theta}) = \begin{pmatrix} T_{n1}(\hat{\theta}) \\ T_{n2}(\hat{\theta}) \\ \vdots \\ T_{nn}(\hat{\theta}) \end{pmatrix}_{n \times 1} = \begin{pmatrix} \frac{1}{2} \sum_{t=1}^{n} \dfrac{(\partial^2 f_z/\partial \mu_{\alpha_1}^2)(z_{1,t}, z_{2,t}, \ldots, z_{n,t})}{f_z(z_{1,t}, z_{2,t}, \ldots, z_{n,t})} \\ \frac{1}{2} \sum_{t=1}^{n} \dfrac{(\partial^2 f_z/\partial \mu_{\alpha_2}^2)(z_{1,t}, z_{2,t}, \ldots, z_{n,t})}{f_z(z_{1,t}, z_{2,t}, \ldots, z_{n,t})} \\ \vdots \\ \frac{1}{2} \sum_{t=1}^{n} \dfrac{(\partial^2 f_z/\partial \mu_{\alpha_n}^2)(z_{1,t}, z_{2,t}, \ldots, z_{n,t})}{f_z(z_{1,t}, z_{2,t}, \ldots, z_{n,t})} \end{pmatrix}_{n \times 1} = \begin{pmatrix} \sum_{t=1}^{n} S_{t1}(\hat{\theta}) \\ \sum_{t=1}^{n} S_{t2}(\hat{\theta}) \\ \vdots \\ \sum_{t=1}^{n} S_{tn}(\hat{\theta}) \end{pmatrix}_{n \times 1}, \tag{5}$$
where
$$T_{ni}(\hat{\theta}) = \frac{1}{2} \sum_{t=1}^{n} \frac{(\partial^2 f_z/\partial \mu_{\alpha_i}^2)(z_{1,t}, z_{2,t}, \ldots, z_{n,t})}{f_z(z_{1,t}, z_{2,t}, \ldots, z_{n,t})} = \sum_{t=1}^{n} S_{ti}(\hat{\theta}), \quad i = 1, 2, \ldots, n,$$
and $T(\hat{\theta}) = (T_{n1}(\hat{\theta}), T_{n2}(\hat{\theta}), \ldots, T_{nn}(\hat{\theta}))^T$, $S_t(\hat{\theta}) = (S_{t1}(\hat{\theta}), S_{t2}(\hat{\theta}), \ldots, S_{tn}(\hat{\theta}))^T$.
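To see what the score $S_{ti}$ measures, consider the simplified special case (our illustration, not part of the paper's general setup) where the errors are independent with $Z_{i,t} \sim N(0, \sigma_{z_i}^2)$ and $\rho_{i,j} = 0$. Direct differentiation of the Gaussian density, using $z_{i,t} = x_{i,t} - \mu_{\alpha_i} x_{i,t-1}$, then gives

```latex
% Special case: independent normal errors (illustrative assumption).
% z_{i,t} = x_{i,t} - \mu_{\alpha_i} x_{i,t-1} is the fitted residual.
S_{ti}(\theta)
  = \frac{1}{2}\,
    \frac{\partial^2 f_z / \partial \mu_{\alpha_i}^2}{f_z}
  = \frac{x_{i,t-1}^{2}\,\bigl(z_{i,t}^{2} - \sigma_{z_i}^{2}\bigr)}
         {2\,\sigma_{z_i}^{4}} .
```

The score is positive exactly when the squared residual exceeds its variance, and it is amplified by $x_{i,t-1}^2$: random coefficients inflate the conditional variance in proportion to $X_{t-1}^2$, which is precisely the signal the LMP statistic accumulates.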
It can be easily shown that $\{S_{ti}(\hat{\theta}), \mathcal{F}_{t,i}^s, i = 1, 2, \ldots, n\}$ is a zero-mean martingale. Using Theorem 1.1 in Billingsley [21] and the Martingale Central Limit Theorem from Hall and Heyde [22], we introduce some lemmas.
Lemma 1.
Assume the conditions:
$(A1)$ $E|S_{ti}(\theta)|^{2+\delta} < \infty$, $i = 1, 2, \ldots, n$, for some $\delta > 0$;
$(A2)$ $\frac{1}{n} \sum_{t=1}^{n} S_t(\theta) S_t(\theta)^T \xrightarrow{a.s.} V_s(\theta)$ as $n \to \infty$.
Then we have
$$\frac{1}{\sqrt{n}} \sum_{t=1}^{n} S_t(\theta) \xrightarrow{d} N(0, V_s(\theta)),$$
where
$$V_s = (v_{ij})_{n \times n} = \begin{pmatrix} Var(S_{t1}(\theta)) & Cov(S_{t1}(\theta), S_{t2}(\theta)) & \cdots & Cov(S_{t1}(\theta), S_{tn}(\theta)) \\ Cov(S_{t2}(\theta), S_{t1}(\theta)) & Var(S_{t2}(\theta)) & \cdots & Cov(S_{t2}(\theta), S_{tn}(\theta)) \\ \vdots & \vdots & \ddots & \vdots \\ Cov(S_{tn}(\theta), S_{t1}(\theta)) & Cov(S_{tn}(\theta), S_{t2}(\theta)) & \cdots & Var(S_{tn}(\theta)) \end{pmatrix}_{n \times n},$$
with
$$v_{ii}(\theta) = \frac{1}{4} E\left[ \frac{\partial^2 l_t}{\partial \mu_{\alpha_i}^2} + \left( \frac{\partial l_t}{\partial \mu_{\alpha_i}} \right)^2 \right]^2, \quad i = 1, 2, \ldots, n,$$
$$v_{ji}(\theta) = v_{ij}(\theta) = \frac{1}{4} E\left\{ \left[ \frac{\partial^2 l_t}{\partial \mu_{\alpha_i}^2} + \left( \frac{\partial l_t}{\partial \mu_{\alpha_i}} \right)^2 \right] \left[ \frac{\partial^2 l_t}{\partial \mu_{\alpha_j}^2} + \left( \frac{\partial l_t}{\partial \mu_{\alpha_j}} \right)^2 \right] \right\}, \quad i \neq j,$$
where $l_t(\theta) = \log f_z(x_{1,t} - \mu_{\alpha_1} x_{1,t-1}, x_{2,t} - \mu_{\alpha_2} x_{2,t-1}, \ldots, x_{n,t} - \mu_{\alpha_n} x_{n,t-1})$.
Lemma 2.
Under conditions $(C_1)$ and $(C_2)$, as $n \to \infty$, we have
$$\frac{1}{n} \sum_{t=1}^{n} \frac{\partial l_t(\theta)}{\partial \theta} \frac{\partial l_t(\theta)}{\partial \theta^T} \xrightarrow{a.s.} I(\theta)_{\xi \times \xi}, \quad \xi = \frac{n^2 + 3n}{2},$$
where
$$I_{ii} = E\left( \frac{\partial l_t}{\partial \theta_i} \right)^2, \quad i = 1, \ldots, \xi; \qquad I_{ij} = E\left( \frac{\partial l_t}{\partial \theta_i} \cdot \frac{\partial l_t}{\partial \theta_j} \right) = I_{ji}, \quad 1 \leq i < j \leq \xi.$$
Lemma 3.
Under conditions $(C_1)$ and $(C_2)$, as $n \to \infty$, we have
$$\frac{1}{\sqrt{n}} \sum_{t=1}^{n} \frac{\partial l_t(\theta)}{\partial \theta} \xrightarrow{d} N(0_{\xi \times 1}, I(\theta)_{\xi \times \xi}),$$
where $I(\theta)$ is defined in Lemma 2. Then, for the asymptotic distribution of $(T_{n1}(\hat{\theta}), \ldots, T_{nn}(\hat{\theta}))^T$, we have the following theorem.
Theorem 1.
Under conditions $(C_1)$–$(C_4)$ and under $H_0$, as $n \to \infty$, we have
$$\frac{1}{\sqrt{n}} (T_{n1}(\hat{\theta}), T_{n2}(\hat{\theta}), \ldots, T_{nn}(\hat{\theta}))^T \xrightarrow{d} N(0_{n \times 1}, \hat{\Gamma}_{n \times n}),$$
where $\hat{\Gamma} = V_s(\hat{\theta}) - J(\hat{\theta}) I^{-1}(\hat{\theta}) J(\hat{\theta})^T$ and $J(\theta) = E[\partial S_t(\theta)/\partial \theta]$; $V_s(\theta)$ and $I(\theta)$ are defined in Lemmas 1 and 2.
Based on Theorem 1, the quadratic form of $(T_{n1}(\hat{\theta}), T_{n2}(\hat{\theta}), \ldots, T_{nn}(\hat{\theta}))$ has a convergent limiting distribution; the non-degeneracy of $\hat{\Gamma}$ is established in the proof of the theorem.

3. Simulation

In this section, we calculate the probability of accepting the null hypothesis when it is true and the probability of rejecting the null hypothesis when it is false at the significance levels $\alpha = 0.1$ and $\alpha = 0.05$. We compare the LMP test and the EL test in terms of empirical sizes and empirical powers, and we add the case where the error term follows a mixture of bivariate normal distributions. Within each study, we set the initial value $X_0 = 0$, and the simulation results are based on 500 repetitions using R software version R-4.4.1.
On the one hand, we consider the simulation results when the null hypothesis is true. We take the bivariate first-order autoregressive (BAR(1)) model
$$X_t = A X_{t-1} + Z_t = \begin{pmatrix} \alpha_1 & 0 \\ 0 & \alpha_2 \end{pmatrix} \begin{pmatrix} X_{1,t-1} \\ X_{2,t-1} \end{pmatrix} + \begin{pmatrix} Z_{1,t} \\ Z_{2,t} \end{pmatrix}.$$
In order to calculate empirical sizes, we consider the two models defined as follows:
Model 1. $Z_t \sim N(0, G)$, where $G = \begin{pmatrix} 1 & 0.6 \\ 0.6 & 9 \end{pmatrix}$.
Model 2. $Z_t = \delta_t Z_t^{(1)} + (1 - \delta_t) Z_t^{(2)}$, where $Z_t^{(1)} \sim N(0, G_1)$, $G_1 = \begin{pmatrix} 1 & 0.6 \\ 0.6 & 9 \end{pmatrix}$, and $Z_t^{(2)} \sim N(0, G_2)$, $G_2 = \begin{pmatrix} 4 & 1.2 \\ 1.2 & 9 \end{pmatrix}$. Furthermore, $\delta_t$ follows a Bernoulli distribution with $P(\delta_t = 1) = 1 - P(\delta_t = 0) = r$.
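The two null-hypothesis data-generating processes above can be sketched as follows. The parameter values mirror Models 1 and 2; the function itself and its name are our illustrative assumptions.

```python
import numpy as np

# Sketch of the data-generating processes used for empirical sizes:
# a BAR(1) model with constant diagonal coefficients (H0 true), driven
# either by normal errors (Model 1) or a normal mixture (Model 2).
rng = np.random.default_rng(1)

G  = np.array([[1.0, 0.6], [0.6, 9.0]])   # Model 1 error covariance
G1 = np.array([[1.0, 0.6], [0.6, 9.0]])   # Model 2, component 1
G2 = np.array([[4.0, 1.2], [1.2, 9.0]])   # Model 2, component 2

def sample_h0(n_obs, a1, a2, model=1, r=0.9):
    """BAR(1) data with constant coefficients (null hypothesis true)."""
    x = np.zeros(2)
    out = np.empty((n_obs, 2))
    A = np.diag([a1, a2])
    for t in range(n_obs):
        if model == 1:
            z = rng.multivariate_normal(np.zeros(2), G)
        else:
            delta = rng.random() < r          # Bernoulli(r) mixture indicator
            z = rng.multivariate_normal(np.zeros(2), G1 if delta else G2)
        x = A @ x + z
        out[t] = x
    return out

data = sample_h0(300, a1=0.2, a2=0.1, model=2, r=0.8)
print(data.shape)  # (300, 2)
```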
The following describes the process used to obtain the simulation results. First, for $\alpha_1 = 0.2, 0.4, 0.6$ and $\alpha_2 = 0.1, 0.2, 0.3$, data are generated using Model 1. Second, for $\alpha_1 = 0.2, 0.4$, $\alpha_2 = 0.1, 0.2$, and $r = 0.9, 0.8$, data are generated using Model 2. For Models 1 and 2, sample sizes of 100, 300, 500, 700, 1000, 2000, and 5000 are taken. Table 1, Table 2, Table 3 and Table 4 show the empirical sizes for the two models. The tables illustrate that the LMP test yields similar outcomes across various error types, including normal errors and contamination errors. From the results, it can be observed that when the sample size is small, such as N = 100, the outcomes of both tests are suboptimal. In comparison with the EL test, the LMP test produces empirical sizes that are closer to the nominal significance level, particularly for larger sample sizes, and in the presence of contamination errors the LMP test generally outperforms the EL test. Furthermore, the empirical sizes obtained from the EL test are higher than those from the LMP test at each sample size; the EL test is therefore more likely to make incorrect decisions in hypothesis testing. Based on the results for the two models, we conclude that the LMP test is more effective than the EL test.
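The Monte Carlo evaluation of empirical size has the following generic shape: generate data under $H_0$, apply a level-$\alpha$ test, and record the rejection frequency over the replications. In this sketch, `my_test` is a hypothetical placeholder for the LMP (or EL) test, not the paper's implementation.

```python
import numpy as np

# Sketch of estimating empirical size by Monte Carlo: the fraction of
# replications in which a level-alpha test rejects a true null hypothesis.
rng = np.random.default_rng(2)

def my_test(sample, alpha=0.05):
    """Hypothetical placeholder; a real version would compute the LMP
    statistic and compare it with its chi-square critical value."""
    return rng.random() < alpha  # behaves like an exact level-alpha test

def empirical_size(n_reps, n_obs, alpha=0.05):
    rejections = 0
    for _ in range(n_reps):
        sample = rng.normal(size=(n_obs, 2))   # data generated under H0
        rejections += my_test(sample, alpha)
    return rejections / n_reps

size = empirical_size(n_reps=500, n_obs=100)
print(0.0 <= size <= 1.0)  # True; the estimate should hover near 0.05
```

Empirical power is computed by the same loop with data generated under the alternative hypothesis instead.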
On the other hand, we consider the simulation results when the alternative hypothesis is true. We take the bivariate first-order random coefficient autoregressive (BRCAR(1)) model
$$X_t = A_t X_{t-1} + Z_t = \begin{pmatrix} \alpha_{1,t} & 0 \\ 0 & \alpha_{2,t} \end{pmatrix} \begin{pmatrix} X_{1,t-1} \\ X_{2,t-1} \end{pmatrix} + \begin{pmatrix} Z_{1,t} \\ Z_{2,t} \end{pmatrix}.$$
Similarly, to calculate empirical powers, we focus on the three models defined below:
Model 3. $\alpha_{1,t} \sim N(\mu_{\alpha_1}, \sigma_{\alpha_1}^2)$, $\alpha_{2,t} \sim N(\mu_{\alpha_2}, \sigma_{\alpha_2}^2)$, $Z_t \sim N(0, G)$, where $G = \begin{pmatrix} 1 & 0.6 \\ 0.6 & 9 \end{pmatrix}$.
Model 4. $\alpha_{1,t} \sim Beta(a_1, b_1)$, $\alpha_{2,t} \sim Beta(a_2, b_2)$, $Z_t \sim N(0, G)$, where $G = \begin{pmatrix} 1 & 0.6 \\ 0.6 & 9 \end{pmatrix}$.
Model 5. $\alpha_{1,t} \sim N(\mu_{\alpha_1}, \sigma_{\alpha_1}^2)$, $\alpha_{2,t} \sim N(\mu_{\alpha_2}, \sigma_{\alpha_2}^2)$, $Z_t = \delta_t Z_t^{(1)} + (1 - \delta_t) Z_t^{(2)}$, where $Z_t^{(1)} \sim N(0, G_1)$, $G_1 = \begin{pmatrix} 1 & 0.6 \\ 0.6 & 9 \end{pmatrix}$, $Z_t^{(2)} \sim N(0, G_2)$, $G_2 = \begin{pmatrix} 4 & 1.2 \\ 1.2 & 9 \end{pmatrix}$, and $\delta_t$ follows a Bernoulli distribution with $P(\delta_t = 1) = 1 - P(\delta_t = 0) = r$.
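Under the alternative hypothesis, the only change relative to the null-hypothesis generators is that the diagonal coefficients are redrawn at every time step. The following sketch covers the normal-coefficient case (Model 3) and the Beta-coefficient case (Model 4); the function and its defaults are our illustrative assumptions.

```python
import numpy as np

# Sketch of the alternative-hypothesis data generators: a BRCAR(1)
# model whose diagonal coefficients are random at every time step.
rng = np.random.default_rng(3)

G = np.array([[1.0, 0.6], [0.6, 9.0]])

def sample_h1(n_obs, model=3, mu=(0.1, 0.1), sd=(0.36, 0.3),
              beta1=(0.5, 0.9), beta2=(0.5, 0.9)):
    x = np.zeros(2)
    out = np.empty((n_obs, 2))
    for t in range(n_obs):
        if model == 3:                       # Model 3: normal coefficients
            alpha = rng.normal(mu, sd)
        else:                                # Model 4: Beta coefficients
            alpha = np.array([rng.beta(*beta1), rng.beta(*beta2)])
        z = rng.multivariate_normal(np.zeros(2), G)
        x = alpha * x + z
        out[t] = x
    return out

sample = sample_h1(500, model=4)
print(sample.shape)  # (500, 2)
```

Model 5 combines the Model 3 coefficients with the Model 2 mixture errors and can be obtained by swapping in the mixture error draw.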
First, in order to calculate the empirical powers of the LMP test and the EL test, we generate samples from Model 3 with parameters $\mu_{\alpha_1} = \mu_{\alpha_2} = 0.1$, $\sigma_{\alpha_1} = 0.3, 0.36, 0.4, 0.46$, and $\sigma_{\alpha_2} = 0.3, 0.4$. Second, we generate samples from Model 4 with parameters $(a_1, b_1) = (0.5, 0.9)$ and $(a_2, b_2) = (0.5, 0.6), (0.5, 0.9), (0.5, 1.2), (0.5, 1.5)$. Third, we generate samples from Model 5 with parameters $\mu_{\alpha_1} = \mu_{\alpha_2} = 0.1$, $\sigma_{\alpha_1} = 0.36, 0.46$, $\sigma_{\alpha_2} = 0.3, 0.4$, and $r = 0.9, 0.8$. For Models 3, 4, and 5, we take sample sizes N = 100, 300, 500, 700, 1000, 2000, and 5000. Table 5, Table 6, Table 7, Table 8, Table 9 and Table 10 present a summary of the empirical powers for the three models. From the results, it can be seen that both the EL test and the LMP test perform poorly with small sample sizes. However, as the sample size increases, the empirical powers of both tests increase; in particular, when the sample size reaches 5000, the empirical powers of both tests are close to 1. In summary, these simulations show that under normal, Beta-distributed, and contaminated errors, the accuracy of the LMP test is higher than that of the EL test by 10.2%, 10.1%, and 30.9%, respectively. As expected, our results show that the LMP test is an effective tool for detecting changes in the parameters of the multivariate RCAR(1) model. In addition, the efficacy of the LMP test increases progressively as the parameter variances $\sigma_{\alpha_1}^2, \sigma_{\alpha_2}^2$ increase, which shows that the LMP test is feasible in practical applications. Overall, whether the null hypothesis or the alternative hypothesis is true, the LMP test makes a correct judgment with high probability and is a robust test.

4. Applications

This section demonstrates the application of the LMP method in two practical examples.

4.1. Economic Data

The data represent the chain growth rate of building construction and the chain growth rate of industrial (excluding construction) production in Germany from May 1988 to April 1993. These data are available on the website https://ceidata.cei.cn/new, accessed on 3 April 2024. The data of the chain growth rate of building construction are denoted as $x_1, x_2, \ldots, x_{60}$, and the data of the chain growth rate of industrial (excluding construction) production are denoted as $y_1, y_2, \ldots, y_{60}$. We observe that $E(x_t) = 0.0645$, $E(y_t) = 0.619333$, $Var(x_t) = 2.996374$, $Var(y_t) = 32.1281$, and $\rho_{x_t y_t} = 0.23279$. Figure 1 and Figure 2 present a sample path diagram and the sample centered series for the two data sets.
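The preprocessing applied to the two series (centering at the sample mean before fitting a BAR model) can be sketched as follows. The data below are synthetic stand-ins, not the actual CEI series, and the helper names are ours.

```python
import numpy as np

# Sketch of the preprocessing: center each series at its sample mean
# and inspect the lag-1 autocorrelation (the quantity the ACF/PACF
# plots summarize graphically).
rng = np.random.default_rng(4)
x = rng.normal(size=60)          # synthetic stand-in for x_1, ..., x_60

def center(series):
    return series - series.mean()

def acf_lag1(series):
    s = center(series)
    return float(np.dot(s[1:], s[:-1]) / np.dot(s, s))

X = center(x)
print(abs(X.mean()) < 1e-10)     # centered series has (numerically) zero mean
print(-1.0 <= acf_lag1(x) <= 1.0)
```

A pronounced lag-1 autocorrelation with negligible higher-order partial autocorrelations is what justifies the first-order (BAR(1)) specification in the text.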
After transforming these two data sets, the resulting data with $X_t = x_t - E(x_t)$ and $Y_t = y_t - E(y_t)$ can be modeled using a BAR model. In addition, the plots of the autocorrelation function (ACF) and partial autocorrelation function (PACF) of the sequences $X_t$ and $Y_t$ are given in Figure 3 and Figure 4, respectively. From Figure 4, it can be seen that these two sets of data come from a bivariate first-order autoregressive process. Therefore, it is reasonable to model the data with a BAR(1) model, and it also shows that the proposed test statistic is valid for both sets of actual data. From Figure 1 and Figure 2, it is unknown whether the parameters are constant; therefore, we consider the following test problem with the help of the proposed test statistic $T(\hat{\theta})$:
$$H_0: \tau = \sigma_{\alpha_1}^2 + \sigma_{\alpha_2}^2 = 0 \quad \text{vs.} \quad H_1: \tau = \sigma_{\alpha_1}^2 + \sigma_{\alpha_2}^2 > 0.$$
The test statistic value in (5) is 33.21573 with a p-value of 0.00104, which is less than 0.05. This result indicates that we reject the null hypothesis at the 5% significance level. Therefore, the BRCAR(1) model is more appropriate than the BAR(1) model in this case.

4.2. Import and Export Data

The data represent the month-on-month growth rates of Japan's export volume and import volume from November 2012 to February 2020 (available at https://ceidata.cei.cn/new, accessed on 3 April 2024). The data of the month-on-month growth rate of exports are denoted as $x_1, x_2, \ldots, x_{88}$, and the month-on-month growth rate of imports are denoted as $y_1, y_2, \ldots, y_{88}$. We observe that $E(x_t) = 0.04034$, $E(y_t) = 0.23352$, $Var(x_t) = 9.9499$, $Var(y_t) = 15.80747$, and $\rho_{x_t y_t} = 0.222839$. Figure 5 and Figure 6 show a sample path diagram and the sample centered series for the two data sets.
Similarly, the data with $X_t = x_t - E(x_t)$ and $Y_t = y_t - E(y_t)$ can be modeled using a BAR model. Furthermore, the plots of the ACF and the PACF of the sequences $X_t$ and $Y_t$ are given in Figure 7 and Figure 8, respectively. From Figure 8, we can see that these two data sets come from a bivariate first-order autoregressive process. From the time series plots in Figure 5 and Figure 6, the randomness of the parameters may be questioned; consequently, it is important to test the hypothesis of parameter randomness. Similarly, we consider the following test problem with the help of the proposed test statistic $T(\hat{\theta})$:
$$H_0: \tau = \sigma_{\alpha_1}^2 + \sigma_{\alpha_2}^2 = 0 \quad \text{vs.} \quad H_1: \tau = \sigma_{\alpha_1}^2 + \sigma_{\alpha_2}^2 > 0.$$
The test statistic value in (5) is 6.905108 with a p-value of 0.03166, which is less than 0.05. Consequently, we reject the null hypothesis at the 5% significance level. Thus, modeling these data with the BRCAR(1) model is preferred over the BAR(1) model.

5. Conclusions

This paper proposes a locally most powerful-type test for testing the constancy of the random coefficient parameters in multivariate autoregressive models. We derive the test statistic and its limiting distribution under the null hypothesis. In practical applications, the coefficients of autoregressive models may not remain constant throughout the entire time period. Therefore, it is necessary to conduct such tests whenever there is a possibility of random variation in the coefficients. Through simulation studies and an analysis of real data, we demonstrate the effectiveness of our testing method and show that its performance surpasses that of competing methods. Finally, we anticipate that our LMP test can be extended and applied to other similar types of time series models.

Author Contributions

Writing—original draft, L.B., D.W., L.C. and D.Q. All authors have read and agreed to the published version of the manuscript.

Funding

This paper is supported by the National Natural Science Foundation of China under Grant No. 12171054.

Data Availability Statement

The data that support the findings of this study are available from the corresponding author upon reasonable request.

Acknowledgments

The authors would like to acknowledge the contributions of all the reviewers and thank them for their insightful comments on the early drafts of this paper.

Conflicts of Interest

The authors declare no conflicts of interest.

Appendix A

Proof of Lemma 1.
Let $\mathcal{F}_{n,i}^s = \sigma(S_{0,i}, S_{1,i}, \ldots, S_{n,i})$ be the $\sigma$-field generated by the set $\{S_{0,i}, S_{1,i}, \ldots, S_{n,i}\}$, $i = 1, \ldots, n$; we have
$$T_{n,1} = \sum_{t=1}^{n} S_{t,1}(\theta), \quad T_{n,2} = \sum_{t=1}^{n} S_{t,2}(\theta), \quad \ldots, \quad T_{n,n} = \sum_{t=1}^{n} S_{t,n}(\theta).$$
By calculation, we can obtain
$$E[S_{n,1}(\theta) \mid \mathcal{F}_{n-1,1}^s] = E\left[ \frac{1}{2} \frac{(\partial^2 f_z/\partial \mu_{\alpha_1}^2)(x_{1,t} - \mu_{\alpha_1} x_{1,t-1}, \ldots, x_{n,t} - \mu_{\alpha_n} x_{n,t-1})}{f_z(x_{1,t} - \mu_{\alpha_1} x_{1,t-1}, \ldots, x_{n,t} - \mu_{\alpha_n} x_{n,t-1})} \,\Big|\, \mathcal{F}_{n-1,1}^s \right] = 0,$$
$$E(T_{n,1}(\theta) \mid \mathcal{F}_{n-1,1}^s) = E(T_{n-1,1} + S_{n,1}(\theta) \mid \mathcal{F}_{n-1,1}^s) = T_{n-1,1} + E(S_{n,1}(\theta) \mid \mathcal{F}_{n-1,1}^s) = T_{n-1,1}.$$
So, $\{T_{n,1}, \mathcal{F}_{n-1,1}^s, n \geq 1\}$ is a martingale sequence. Since $E|S_{t,i}|^2 < \infty$ and $\{X_t\}$ is a strictly stationary ergodic process, by the ergodic properties, Theorem 1.1 in Billingsley [21], and its application in Zhang et al. [23], we can obtain
$$\frac{1}{n} \sum_{t=1}^{n} S_{t,1}^2(\theta) \xrightarrow{a.s.} E[S_{t,1}^2(\theta)] = Var[S_{t,1}(\theta)] = v_{11}.$$
Hence, we can employ the Martingale Central Limit Theorem from Hall and Heyde [22]. As $n \to \infty$, we obtain
$$\frac{1}{\sqrt{n}} \sum_{t=1}^{n} S_{t,1}(\theta) \xrightarrow{d} N(0, v_{11}).$$
Similarly, we can verify that $\{T_{n,i}, \mathcal{F}_{n-1,i}^s, n \geq 1\}$, $i = 2, \ldots, n$, is a martingale sequence and the following equation holds:
$$\frac{1}{\sqrt{n}} \sum_{t=1}^{n} S_{t,i}(\theta) \xrightarrow{d} N(0, v_{ii}), \quad i = 2, \ldots, n.$$
For any vector $c = (c_1, c_2, \ldots, c_n)^T \in \mathbb{R}^n \setminus \{(0, 0, \ldots, 0)^T\}$, we have
$$\frac{1}{\sqrt{n}} c^T \begin{pmatrix} T_{n,1}(\theta) \\ T_{n,2}(\theta) \\ \vdots \\ T_{n,n}(\theta) \end{pmatrix}_{n \times 1} = \frac{1}{\sqrt{n}} \sum_{t=1}^{n} \big( c_1 S_{t,1}(\theta) + c_2 S_{t,2}(\theta) + \cdots + c_n S_{t,n}(\theta) \big) \xrightarrow{d} N\big( 0, \, Var(c_1 S_{t,1}(\theta) + c_2 S_{t,2}(\theta) + \cdots + c_n S_{t,n}(\theta)) \big).$$
By the Cramér–Wold device, we obtain
$$\frac{1}{\sqrt{n}} \begin{pmatrix} T_{n,1}(\theta) \\ T_{n,2}(\theta) \\ \vdots \\ T_{n,n}(\theta) \end{pmatrix}_{n \times 1} = \frac{1}{\sqrt{n}} \sum_{t=1}^{n} S_t(\theta) \xrightarrow{d} N(0, V_s(\theta)).$$
This completes the proof. □
Proof of Lemma 2.
By Theorem 1.1 in Billingsley [21], we know that, entry by entry,
$$\frac{1}{n} \sum_{t=1}^{n} \frac{\partial l_t(\theta)}{\partial \theta} \frac{\partial l_t(\theta)}{\partial \theta^T} = \left( \frac{1}{n} \sum_{t=1}^{n} \frac{\partial l_t}{\partial \theta_i} \cdot \frac{\partial l_t}{\partial \theta_j} \right)_{\xi \times \xi} \xrightarrow{a.s.} \left( E\left[ \frac{\partial l_t}{\partial \theta_i} \cdot \frac{\partial l_t}{\partial \theta_j} \right] \right)_{\xi \times \xi} = I(\theta),$$
where the coordinates of $\theta$ are ordered as $(\mu_{\alpha_1}, \ldots, \mu_{\alpha_n}, \sigma_{z_1}^2, \ldots, \sigma_{z_n}^2, \rho_{1,2}, \ldots, \rho_{n-1,n})^T$.
This completes the proof. □
Proof of Lemma 3.
Let $\mathcal{F}_{i,n} = \sigma(X_{i,0}, X_{i,1}, \ldots, X_{i,n})$ be the $\sigma$-field generated by the set $\{X_{i,0}, X_{i,1}, \ldots, X_{i,n}\}$, $i = 1, 2, \ldots, n$, and let $G_{i,0} = 0$ and
$$G_{i,n} = \sum_{t=1}^{n} \frac{(\partial f_z/\partial \mu_{\alpha_i})(z_{1,t}, z_{2,t}, \ldots, z_{n,t})}{f_z(z_{1,t}, z_{2,t}, \ldots, z_{n,t})}, \quad i = 1, 2, \ldots, n.$$
Then, we have
$$E(G_{i,n} \mid \mathcal{F}_{i,n-1}) = E\left( G_{i,n-1} + \frac{\partial f_z/\partial \mu_{\alpha_i}}{f_z} \,\Big|\, \mathcal{F}_{i,n-1} \right) = G_{i,n-1} + E\left( \frac{\partial f_z/\partial \mu_{\alpha_i}}{f_z} \,\Big|\, \mathcal{F}_{i,n-1} \right) = G_{i,n-1}.$$
Thus, $\{G_{i,n}, \mathcal{F}_{i,n-1}, n \geq 1\}$, $i = 1, 2, \ldots, n$, is a martingale sequence. Since $E|X_{i,t}|^2 < \infty$, $i = 1, 2, \ldots, n$, and $\{X_t\}$ is a strictly stationary ergodic process, as $n \to \infty$, we have
$$\frac{1}{n} \sum_{t=1}^{n} \left( \frac{\partial l_t(z_{1,t}, \ldots, z_{n,t})}{\partial \mu_{\alpha_i}} \right)^2 \xrightarrow{a.s.} \sigma_{G_i}^2, \quad i = 1, 2, \ldots, n.$$
Therefore, we employ the Martingale Central Limit Theorem from Hall and Heyde [22]; as n , we obtain
$$\frac{1}{\sqrt{n}} G_{i,n} \xrightarrow{d} N(0, \sigma_{G_i}^2), \quad i = 1, 2, \ldots, n.$$
Similarly, let
$$M_{i,n} = \frac{\partial \log L(\theta)}{\partial \sigma_{z_i}^2} = \sum_{t=1}^{n} \frac{(\partial f_z/\partial \sigma_{z_i}^2)(z_{1,t}, z_{2,t}, \ldots, z_{n,t})}{f_z(z_{1,t}, z_{2,t}, \ldots, z_{n,t})}, \quad i = 1, 2, \ldots, n.$$
It is easy to verify that $\{M_{i,n}, \mathcal{F}_{i,n-1}, n \geq 1\}$, $i = 1, 2, \ldots, n$, is a martingale. Because $E|X_{i,t}|^2 < \infty$, $i = 1, 2, \ldots, n$, by Theorem 1.1 in Billingsley [21], as $n \to \infty$, we have
$$\frac{1}{n} \sum_{t=1}^{n} \left( \frac{\partial l_t(z_{1,t}, \ldots, z_{n,t})}{\partial \sigma_{z_i}^2} \right)^2 \xrightarrow{a.s.} \sigma_{M_i}^2, \quad i = 1, 2, \ldots, n.$$
By the Martingale Central Limit theorem (Hall and Heyde [22]), as n , we obtain
$$\frac{1}{\sqrt{n}} M_{i,n} \xrightarrow{d} N(0, \sigma_{M_i}^2), \quad i = 1, 2, \ldots, n.$$
Similarly,
$$Q_{r,n} = \frac{\partial \log L(\theta)}{\partial \rho_{i,j}} = \sum_{t=1}^{n} \frac{(\partial f_z/\partial \rho_{i,j})(z_{1,t}, z_{2,t}, \ldots, z_{n,t})}{f_z(z_{1,t}, z_{2,t}, \ldots, z_{n,t})},$$
where $r = 1, 2, \ldots, \frac{n(n-1)}{2}$ and $i, j = 1, 2, \ldots, n$, $i \neq j$. Thus, $\{Q_{r,n}, \mathcal{F}_{r,n-1}, n \geq 1\}$, $r = 1, 2, \ldots, \frac{n(n-1)}{2}$, is a martingale. By Theorem 1.1 in Billingsley [21] and the Martingale Central Limit Theorem (Hall and Heyde [22]), as $n \to \infty$, we obtain
$$\frac{1}{\sqrt{n}} Q_{r,n} \xrightarrow{d} N(0, \sigma_{Q_r}^2), \quad r = 1, 2, \ldots, \frac{n(n-1)}{2}.$$
In the same way, for any vector $c = (c_1, c_2, \ldots, c_\xi)^T \in \mathbb{R}^\xi \setminus \{(0, \ldots, 0)^T\}$, let $G_\xi = (G_{1,n}, G_{2,n}, \ldots, G_{n,n}, M_{1,n}, M_{2,n}, \ldots, M_{n,n}, Q_{1,n}, Q_{2,n}, \ldots, Q_{\frac{n(n-1)}{2},n})^T$; then, $\{c^T G_\xi, \mathcal{F}_{n-1}, n \geq 1\}$ is a martingale sequence. By the ergodic theorem, as $n \to \infty$, we can prove that
$$\frac{1}{n} \sum_{t=1}^{n} \frac{1}{f_z^2} \left( c_1 \frac{\partial f_z}{\partial \mu_{\alpha_1}} + \cdots + c_n \frac{\partial f_z}{\partial \mu_{\alpha_n}} + c_{n+1} \frac{\partial f_z}{\partial \sigma_{z_1}^2} + \cdots + c_{2n} \frac{\partial f_z}{\partial \sigma_{z_n}^2} + c_{2n+1} \frac{\partial f_z}{\partial \rho_{1,2}} + \cdots + c_\xi \frac{\partial f_z}{\partial \rho_{n-1,n}} \right)^2 \xrightarrow{a.s.} \sigma^2,$$
where
$$\sigma^2 = E\left[ \frac{1}{f_z^2} \left( c_1 \frac{\partial f_z}{\partial \mu_{\alpha_1}} + \cdots + c_n \frac{\partial f_z}{\partial \mu_{\alpha_n}} + c_{n+1} \frac{\partial f_z}{\partial \sigma_{z_1}^2} + \cdots + c_{2n} \frac{\partial f_z}{\partial \sigma_{z_n}^2} + c_{2n+1} \frac{\partial f_z}{\partial \rho_{1,2}} + \cdots + c_\xi \frac{\partial f_z}{\partial \rho_{n-1,n}} \right)^2 \right].$$
Note that
$$\frac{1}{\sqrt{n}} \begin{pmatrix} G_{1,n} \\ \vdots \\ G_{n,n} \\ M_{1,n} \\ \vdots \\ M_{n,n} \\ Q_{1,n} \\ \vdots \\ Q_{\frac{n(n-1)}{2},n} \end{pmatrix}_{\xi \times 1} = \frac{1}{\sqrt{n}} \sum_{t=1}^{n} \frac{\partial l_t(\theta)}{\partial \theta} \xrightarrow{d} N(0_{\xi \times 1}, I(\theta)_{\xi \times \xi}).$$
By the Cramér–Wold device, Lemma 3 is established. □
Proof of Theorem 1.
Under the null hypothesis, the log-likelihood function is
$$\log L(\theta) = \sum_{t=1}^{n} l_t(\theta),$$
where $l_t(\theta) = \log f_{X_t|X_{t-1}}(x_t \mid x_{t-1}) = \log f_z(z_{1,t}, z_{2,t}, \ldots, z_{n,t})$.
The conditional maximum likelihood estimate θ ^ of parameter θ can be obtained by maximizing the log-likelihood function log L ( θ ) . Applying the Taylor series expansion, we have
$$0 = \frac{1}{\sqrt{n}} \sum_{t=1}^{n} \frac{\partial l_t(\hat{\theta})}{\partial \theta} = \frac{1}{\sqrt{n}} \sum_{t=1}^{n} \frac{\partial l_t(\theta_0)}{\partial \theta} + \left[ \frac{1}{n} \sum_{t=1}^{n} \frac{\partial^2 l_t(\theta^*)}{\partial \theta \, \partial \theta^T} \right] \sqrt{n} (\hat{\theta} - \theta_0), \tag{A1}$$
where $\theta^*$ lies between $\theta_0$ and $\hat{\theta}$. Because $\hat{\theta} \xrightarrow{p} \theta_0$, we also have $\theta^* \xrightarrow{p} \theta_0$. The expression can be rearranged to obtain
$$\sqrt{n} (\hat{\theta} - \theta_0) = - \left[ \frac{1}{n} \sum_{t=1}^{n} \frac{\partial^2 l_t(\theta^*)}{\partial \theta \, \partial \theta^T} \right]^{-1} \frac{1}{\sqrt{n}} \sum_{t=1}^{n} \frac{\partial l_t(\theta_0)}{\partial \theta}.$$
Performing a Taylor expansion of $S_t(\hat{\theta})$ around the true value $\theta_0$, we can obtain
1 n t = 1 n S t ( θ ^ ) = 1 n t = 1 n S t ( θ 0 ) + 1 n t = 1 n S t ( θ 0 ) ( θ ^ θ ) + O p ( 1 ) ,
where S t ( θ ) = ( S t 1 θ , , S t n θ ) T . Because of θ ^ p θ 0 , we can prove that S t ( θ 0 ) = O p ( n ) , so the remainder term is O p ( 1 ) . Therefore, the second term in (A2) can be written as
$$\frac{1}{\sqrt{n}} \sum_{t=1}^{n} \dot{S}_t(\theta_0) (\hat{\theta} - \theta_0) = -\frac{1}{n} \sum_{t=1}^{n} \dot{S}_t(\theta_0) \left[ \frac{1}{n} \sum_{t=1}^{n} \frac{\partial^2 l_t(\theta^*)}{\partial \theta \, \partial \theta^T} \right]^{-1} \frac{1}{\sqrt{n}} \sum_{t=1}^{n} \frac{\partial l_t(\theta_0)}{\partial \theta}. \qquad \mathrm{(A3)}$$
Note that under the null hypothesis,
$$\frac{1}{n} \sum_{t=1}^{n} \dot{S}_t(\theta) = \frac{1}{n} \sum_{t=1}^{n} \left( \frac{\partial^2 l_t}{\partial \sigma_{\alpha_i}^2 \, \partial \theta_j} \right)_{n \times \xi} \xrightarrow{a.s.} \left( E \left[ \frac{\partial^2 l_t}{\partial \sigma_{\alpha_i}^2 \, \partial \theta_j} \right] \right)_{n \times \xi} = -\left( E \left[ \frac{\partial l_t}{\partial \sigma_{\alpha_i}^2} \cdot \frac{\partial l_t}{\partial \theta_j} \right] \right)_{n \times \xi} \triangleq -J(\theta),$$
where $i = 1, \ldots, n$ and $\theta_j$ runs over the $\xi$ components $\mu_{\alpha_1}, \ldots, \mu_{\alpha_n}, \sigma_{Z_1}^2, \ldots, \sigma_{Z_n}^2, \rho_{1,2}, \ldots, \rho_{n-1,n}$ of $\theta$; the convergence follows from the ergodic theorem, and the last equality follows from the information matrix identity $E \left[ \frac{\partial^2 l_t}{\partial \theta_i \, \partial \theta_j} \right] = -E \left[ \frac{\partial l_t}{\partial \theta_i} \cdot \frac{\partial l_t}{\partial \theta_j} \right]$.
Thus, Equation (A3) is asymptotically equivalent to
$$-J(\theta) I^{-1}(\theta) \frac{1}{\sqrt{n}} \sum_{t=1}^{n} \frac{\partial l_t(\theta)}{\partial \theta}.$$
So, Equation (A2) can be written as
$$\frac{1}{\sqrt{n}} \sum_{t=1}^{n} S_t(\hat{\theta}) = \frac{1}{\sqrt{n}} \sum_{t=1}^{n} S_t(\theta_0) - J(\theta) I^{-1}(\theta) \frac{1}{\sqrt{n}} \sum_{t=1}^{n} \frac{\partial l_t(\theta)}{\partial \theta} + o_p(1).$$
For any vector $c = (c_n^T, c_\xi^T)^T \in \mathbb{R}^{n+\xi} \setminus \{(0, \ldots, 0)^T\}$, where $c_n$ and $c_\xi$ are column vectors of dimensions $n$ and $\xi$, respectively, Lemmas 1 and 3 show that $\{\sum_{t=1}^{n} S_{ti}(\theta), \mathcal{F}_{n-1}^{n}, n \geq 1\}$, $i = 1, 2, \ldots, n$, and $\{\sum_{t=1}^{n} \frac{\partial l_t(\theta)}{\partial \theta}, \mathcal{F}_{n-1}^{\xi}, n \geq 1\}$ are martingales. By the ergodic theorem and the martingale central limit theorem, as $n \to \infty$, we can deduce that
$$\frac{1}{\sqrt{n}} c^T \begin{pmatrix} \sum_{t=1}^{n} S_t(\theta) \\ \sum_{t=1}^{n} \frac{\partial l_t(\theta)}{\partial \theta} \end{pmatrix} = \frac{1}{\sqrt{n}} \sum_{t=1}^{n} \left( c_n^T S_t(\theta) + c_\xi^T \frac{\partial l_t(\theta)}{\partial \theta} \right) \xrightarrow{d} N \left( 0, E \left[ c_n^T S_t(\theta) + c_\xi^T \frac{\partial l_t(\theta)}{\partial \theta} \right]^2 \right).$$
By the Cramér–Wold device, we obtain
$$\frac{1}{\sqrt{n}} \begin{pmatrix} \sum_{t=1}^{n} S_t(\theta) \\ \sum_{t=1}^{n} \frac{\partial l_t(\theta)}{\partial \theta} \end{pmatrix} \xrightarrow{d} N \left( \begin{pmatrix} 0 \\ 0 \end{pmatrix}, \begin{pmatrix} V_s(\hat{\theta})_{n \times n} & J(\hat{\theta})_{n \times \xi} \\ J(\hat{\theta})^{T}_{\xi \times n} & I(\hat{\theta})_{\xi \times \xi} \end{pmatrix} \right),$$
where $(0, 0)^T$ denotes the zero column vector of dimension $n + \xi$. Then, we can conclude that
$$\frac{1}{\sqrt{n}} T(\hat{\theta}) = \frac{1}{\sqrt{n}} \sum_{t=1}^{n} S_t(\hat{\theta}) \xrightarrow{d} N \left( 0_{n \times 1}, \hat{\Gamma}_{n \times n} \right),$$
where $\hat{\Gamma} = V_s(\hat{\theta}) - J(\hat{\theta}) I^{-1}(\hat{\theta}) J(\hat{\theta})^{T}$, and $\hat{\Gamma}$ is nonsingular.
Furthermore, we can derive that
$$\frac{1}{n} T(\hat{\theta})^{T}_{1 \times n} \, \hat{\Gamma}^{-1}_{n \times n} \, T(\hat{\theta})_{n \times 1} \xrightarrow{d} \chi^2(n), \quad n \to \infty,$$
where $\chi^2(n)$ denotes the chi-squared distribution with $n$ degrees of freedom. □
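Numerically, the theorem translates into a simple decision rule: compute $\frac{1}{n} T(\hat{\theta})^T \hat{\Gamma}^{-1} T(\hat{\theta})$ and reject the null hypothesis when it exceeds the upper-α quantile of $\chi^2(n)$. The sketch below shows this for the bivariate case $n = 2$ using only the standard library; the score vector T, covariance Gamma, and sample size are hypothetical placeholders, not values estimated from the paper's models.

```python
# Hypothetical sketch of the test's decision rule for n = 2
# (all inputs below are illustrative placeholders).

def lmp_statistic(T, Gamma, n_obs):
    # Invert the 2x2 matrix Gamma, then form the quadratic
    # form T' Gamma^{-1} T / n_obs.
    (a, b), (c, d) = Gamma
    det = a * d - b * c
    inv = ((d / det, -b / det), (-c / det, a / det))
    y = tuple(inv[i][0] * T[0] + inv[i][1] * T[1] for i in range(2))
    quad = T[0] * y[0] + T[1] * y[1]
    return quad / n_obs

CHI2_2_95 = 5.991  # 0.95 quantile of chi-squared with 2 degrees of freedom

stat = lmp_statistic(T=(40.0, -25.0), Gamma=((4.0, 1.0), (1.0, 3.0)), n_obs=100)
reject = stat > CHI2_2_95
print(round(stat, 3), reject)  # 8.455 True
```

For a general dimension one would replace the hand-coded 2×2 inverse with a numerical linear solve; the degrees of freedom of the reference chi-squared distribution equal the dimension of the score vector T.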

References

  1. Yule, G.U. VII. On a method of investigating periodicities in disturbed series, with special reference to Wolfer's sunspot numbers. Philos. Trans. R. Soc. Lond. A 1927, 226, 267–298.
  2. Box, G.E.P.; Jenkins, G.M.; Reinsel, G.C.; Ljung, G.M. Time Series Analysis: Forecasting and Control; John Wiley & Sons: Hoboken, NJ, USA, 2015.
  3. Maleki, M.; Nematollahi, A.R. Autoregressive models with mixture of scale mixtures of Gaussian innovations. Iran. J. Sci. Technol. Trans. A Sci. 2017, 41, 1099–1107.
  4. Pavlyuk, D. Short-term traffic forecasting using multivariate autoregressive models. Procedia Eng. 2017, 178, 57–66.
  5. Gilson, M.; Tauste Campo, A.; Chen, X.; Thiele, A.; Deco, G. Nonparametric test for connectivity detection in multivariate autoregressive networks and application to multiunit activity data. Netw. Neurosci. 2017, 1, 357–380.
  6. Anděl, J. Autoregressive series with random parameters. Math. Operationsforsch. Stat. 1976, 7, 735–741.
  7. Feigin, P.D.; Tweedie, R.L. Random coefficient autoregressive processes: A Markov chain analysis of stationarity and finiteness of moments. J. Time Ser. Anal. 1985, 6, 1–14.
  8. Aue, A.; Horváth, L.; Steinebach, J. Estimation in random coefficient autoregressive models. J. Time Ser. Anal. 2006, 27, 61–76.
  9. Aue, A.; Horváth, L. Quasi-likelihood estimation in stationary and nonstationary autoregressive models with random coefficients. Stat. Sin. 2011, 21, 973–999.
  10. Dette, H.; Wied, D. Detecting relevant changes in time series models. J. R. Stat. Soc. Ser. B Stat. Methodol. 2016, 78, 371–394.
  11. Horváth, L.; Trapani, L. Testing for randomness in a random coefficient autoregression model. J. Econom. 2019, 209, 338–352.
  12. Bi, L.; Lu, F.; Yang, K.; Wang, D. Locally most powerful test for the random coefficient autoregressive model. Math. Probl. Eng. 2019, 1, 1099–1107.
  13. Wang, D.; Ghosh, S.K.; Pantula, S.G. Maximum likelihood estimation and unit root test for first order random coefficient autoregressive models. J. Stat. Theory Pract. 2010, 4, 261–278.
  14. Jin, C.; Dehui, W.; Cong, L.; Jingwen, H. Estimation and testing of multivariate random coefficient autoregressive model based on empirical likelihood. Commun. Stat. Simul. Comput. 2023, 52, 291–308.
  15. Prášková, Z.; Vaněček, P. On a class of estimators in a multivariate RCA(1) model. Kybernetika 2011, 47, 501–551.
  16. Vaněček, P. Rate of convergence for a class of RCA estimators. Kybernetika 2006, 42, 699–709.
  17. Nicholls, D.F.; Quinn, B.G. Random Coefficient Autoregressive Models: An Introduction; Springer: New York, NY, USA, 1982.
  18. Awale, M.; Balakrishna, N.; Ramanathan, T.V. Testing the constancy of the thinning parameter in a random coefficient integer autoregressive model. Stat. Pap. 2019, 60, 1515–1539.
  19. Chikkagoudar, M.S.; Biradar, B.S. Locally most powerful rank tests for comparison of two failure rates based on multiple Type-II censored data. Commun. Stat. Theory Methods 2012, 41, 4315–4331.
  20. Rohatgi, V.K.; Saleh, A.K.Md.E.; Ahluwalia, R.; Ji, P. Locally most powerful tests for the two-sample problem when the combined sample is Type-II censored. Commun. Stat. Theory Methods 1990, 19, 2337–2355.
  21. Billingsley, P. The Lindeberg–Lévy theorem for martingales. Proc. Am. Math. Soc. 1961, 12, 788–792.
  22. Hall, P.; Heyde, C.C. Martingale Limit Theory and Its Application; Academic Press: New York, NY, USA, 2014.
  23. Zhang, H.; Wang, D.; Zhu, F. The empirical likelihood for first-order random coefficient integer-valued autoregressive processes. Commun. Stat. Theory Methods 2011, 40, 492–509.
Figure 1. (a) The sample path of the series $x_t$. (b) The plot of the centered series $X_t$ for the real data.
Figure 2. (a) The sample path of the series $y_t$. (b) The plot of the centered series $Y_t$ for the real data.
Figure 3. (a) The sample ACF plot of $X_t$. (b) The sample PACF plot of $X_t$.
Figure 4. (a) The sample ACF plot of $Y_t$. (b) The sample PACF plot of $Y_t$.
Figure 5. (a) The sample path of the series $x_t$. (b) The plot of the centered series $X_t$ for the real data.
Figure 6. (a) The sample path of the series $y_t$. (b) The plot of the centered series $Y_t$ for the real data.
Figure 7. (a) The sample ACF plot of $X_t$. (b) The sample PACF plot of $X_t$.
Figure 8. (a) The sample ACF plot of $Y_t$. (b) The sample PACF plot of $Y_t$.
Table 1. Empirical sizes of the LMP and EL tests at the significance level of 0.1 for Model 1.

(α1, α2)    Test  N=100  N=300  N=500  N=700  N=1000  N=2000  N=5000
(0.2, 0.1)  LMP   0.264  0.166  0.168  0.142  0.138   0.125   0.106
            EL    0.446  0.344  0.334  0.362  0.318   0.366   0.328
(0.4, 0.1)  LMP   0.290  0.180  0.172  0.126  0.146   0.124   0.092
            EL    0.434  0.342  0.330  0.356  0.354   0.300   0.312
(0.6, 0.1)  LMP   0.282  0.154  0.168  0.156  0.128   0.114   0.106
            EL    0.462  0.352  0.356  0.346  0.342   0.342   0.306
(0.2, 0.2)  LMP   0.266  0.174  0.142  0.154  0.146   0.290   0.110
            EL    0.446  0.356  0.320  0.338  0.340   0.434   0.334
(0.4, 0.2)  LMP   0.268  0.180  0.154  0.156  0.118   0.112   0.094
            EL    0.410  0.332  0.302  0.346  0.350   0.326   0.346
(0.6, 0.2)  LMP   0.260  0.190  0.158  0.154  0.140   0.128   0.094
            EL    0.434  0.360  0.298  0.318  0.366   0.322   0.324
(0.2, 0.3)  LMP   0.246  0.220  0.188  0.164  0.144   0.120   0.112
            EL    0.452  0.350  0.318  0.364  0.340   0.346   0.318
(0.4, 0.3)  LMP   0.280  0.184  0.150  0.158  0.106   0.136   0.122
            EL    0.464  0.344  0.352  0.356  0.348   0.360   0.332
(0.6, 0.3)  LMP   0.280  0.202  0.172  0.144  0.138   0.108   0.118
            EL    0.400  0.378  0.334  0.332  0.320   0.296   0.304
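Each entry of Tables 1–4 is an empirical size: the fraction of Monte Carlo replications, simulated under the null hypothesis, in which the test rejects at the nominal level. The loop below sketches that mechanic with a deliberately simple stand-in test (a two-sided z-test for a zero mean at nominal level 0.1), not the LMP or EL statistics; all names and values are our own illustration.

```python
# Empirical size of a test = rejection frequency under H0 across
# Monte Carlo replications (stand-in z-test, not the paper's LMP test).
import random

def empirical_size(reps=2000, n=50, z_crit=1.6449, seed=7):
    # z_crit = 0.95 standard normal quantile, i.e. a two-sided
    # test at nominal level 0.1.
    rng = random.Random(seed)
    rejections = 0
    for _ in range(reps):
        xs = [rng.gauss(0.0, 1.0) for _ in range(n)]
        z = sum(xs) / n ** 0.5       # exactly N(0, 1) under H0
        if abs(z) > z_crit:
            rejections += 1
    return rejections / reps

size = empirical_size()
print(size)  # should be close to the nominal level 0.1
```

A well-calibrated test produces empirical sizes near the nominal level as N grows, which is exactly the pattern the LMP columns of Tables 1–4 display and the EL columns do not.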
Table 2. Empirical sizes of the LMP and EL tests at the significance level of 0.1 for Model 2.

(α1, α2)    Test  N=100  N=300  N=500  N=700  N=1000  N=2000  N=5000
r = 0.8
(0.2, 0.1)  LMP   0.354  0.302  0.262  0.284  0.278   0.204   0.182
            EL    0.498  0.324  0.350  0.322  0.314   0.306   0.276
(0.4, 0.1)  LMP   0.346  0.332  0.264  0.308  0.240   0.226   0.208
            EL    0.458  0.356  0.310  0.336  0.336   0.314   0.310
(0.2, 0.2)  LMP   0.384  0.314  0.272  0.248  0.246   0.220   0.158
            EL    0.462  0.334  0.358  0.356  0.342   0.352   0.332
(0.4, 0.2)  LMP   0.364  0.304  0.314  0.328  0.226   0.232   0.174
            EL    0.490  0.354  0.316  0.318  0.284   0.310   0.348
r = 0.9
(0.2, 0.1)  LMP   0.310  0.274  0.248  0.242  0.216   0.166   0.184
            EL    0.460  0.358  0.326  0.336  0.354   0.298   0.324
(0.4, 0.1)  LMP   0.302  0.298  0.258  0.257  0.240   0.236   0.194
            EL    0.440  0.364  0.348  0.328  0.362   0.354   0.358
(0.2, 0.2)  LMP   0.344  0.248  0.266  0.238  0.220   0.198   0.184
            EL    0.496  0.402  0.328  0.322  0.386   0.314   0.342
(0.4, 0.2)  LMP   0.334  0.264  0.270  0.254  0.254   0.210   0.164
            EL    0.472  0.346  0.354  0.364  0.346   0.372   0.302
Table 3. Empirical sizes of the LMP and EL tests at the significance level of 0.05 for Model 1.

(α1, α2)    Test  N=100  N=300  N=500  N=700  N=1000  N=2000  N=5000
(0.2, 0.1)  LMP   0.178  0.110  0.098  0.080  0.082   0.090   0.066
            EL    0.334  0.234  0.214  0.236  0.190   0.240   0.194
(0.4, 0.1)  LMP   0.194  0.116  0.104  0.078  0.084   0.068   0.048
            EL    0.294  0.240  0.212  0.210  0.224   0.162   0.200
(0.6, 0.1)  LMP   0.218  0.088  0.100  0.108  0.078   0.070   0.054
            EL    0.344  0.220  0.204  0.234  0.226   0.214   0.198
(0.2, 0.2)  LMP   0.194  0.108  0.090  0.080  0.082   0.194   0.052
            EL    0.326  0.248  0.190  0.212  0.214   0.294   0.202
(0.4, 0.2)  LMP   0.190  0.132  0.094  0.096  0.070   0.064   0.052
            EL    0.296  0.232  0.178  0.228  0.200   0.206   0.214
(0.6, 0.2)  LMP   0.166  0.120  0.096  0.090  0.080   0.072   0.066
            EL    0.322  0.242  0.202  0.202  0.232   0.194   0.200
(0.2, 0.3)  LMP   0.164  0.130  0.110  0.094  0.084   0.056   0.054
            EL    0.322  0.214  0.210  0.222  0.220   0.206   0.186
(0.4, 0.3)  LMP   0.118  0.114  0.088  0.110  0.056   0.072   0.052
            EL    0.348  0.222  0.224  0.222  0.208   0.208   0.186
(0.6, 0.3)  LMP   0.206  0.142  0.094  0.104  0.084   0.064   0.056
            EL    0.298  0.244  0.218  0.186  0.192   0.198   0.188
Table 4. Empirical sizes of the LMP and EL tests at the significance level of 0.05 for Model 2.

(α1, α2)    Test  N=100  N=300  N=500  N=700  N=1000  N=2000  N=5000
r = 0.8
(0.2, 0.1)  LMP   0.278  0.230  0.184  0.192  0.218   0.122   0.114
            EL    0.372  0.196  0.204  0.196  0.178   0.178   0.172
(0.4, 0.1)  LMP   0.272  0.256  0.192  0.244  0.160   0.150   0.110
            EL    0.322  0.234  0.198  0.196  0.222   0.176   0.182
(0.2, 0.2)  LMP   0.292  0.230  0.208  0.170  0.168   0.166   0.098
            EL    0.338  0.214  0.250  0.232  0.206   0.220   0.216
(0.4, 0.2)  LMP   0.290  0.236  0.226  0.224  0.154   0.170   0.104
            EL    0.358  0.232  0.194  0.184  0.172   0.200   0.214
r = 0.9
(0.2, 0.1)  LMP   0.220  0.208  0.180  0.176  0.140   0.092   0.122
            EL    0.336  0.240  0.204  0.226  0.200   0.188   0.200
(0.4, 0.1)  LMP   0.222  0.224  0.198  0.176  0.178   0.148   0.124
            EL    0.326  0.218  0.202  0.218  0.228   0.206   0.216
(0.2, 0.2)  LMP   0.240  0.182  0.196  0.166  0.166   0.134   0.110
            EL    0.348  0.266  0.206  0.204  0.228   0.196   0.208
(0.4, 0.2)  LMP   0.244  0.198  0.190  0.174  0.174   0.146   0.102
            EL    0.360  0.234  0.222  0.230  0.224   0.244   0.180
Table 5. Empirical powers of the LMP and EL tests at the significance level of 0.1 for Model 3.

(σα1, σα2)   Test  N=100  N=300  N=500  N=700  N=1000  N=2000  N=5000
(0.3, 0.3)   LMP   0.212  0.438  0.638  0.778  0.904   1.000   1.000
             EL    0.514  0.492  0.532  0.646  0.710   0.882   0.990
(0.3, 0.4)   LMP   0.302  0.612  0.856  0.952  0.978   0.996   1.000
             EL    0.542  0.596  0.674  0.790  0.886   0.974   1.000
(0.36, 0.3)  LMP   0.302  0.582  0.770  0.908  0.970   1.000   1.000
             EL    0.575  0.518  0.606  0.642  0.756   0.856   0.920
(0.36, 0.4)  LMP   0.300  0.732  0.914  0.952  0.944   0.998   1.000
             EL    0.600  0.570  0.692  0.758  0.846   0.950   1.000
(0.4, 0.3)   LMP   0.286  0.626  0.854  0.938  0.978   0.998   1.000
             EL    0.540  0.532  0.580  0.644  0.690   0.768   0.989
(0.4, 0.4)   LMP   0.308  0.742  0.946  0.972  0.994   1.000   1.000
             EL    0.558  0.590  0.698  0.768  0.784   0.940   1.000
(0.46, 0.3)  LMP   0.340  0.700  0.892  0.972  0.988   1.000   1.000
             EL    0.618  0.594  0.672  0.764  0.792   0.956   0.996
(0.46, 0.4)  LMP   0.390  0.790  0.916  0.952  0.970   0.996   1.000
             EL    0.618  0.614  0.624  0.676  0.722   0.895   1.000
Table 6. Empirical powers of the LMP and EL tests at the significance level of 0.1 for Model 4.

(b1, b2)    Test  N=100  N=300  N=500  N=700  N=1000  N=2000  N=5000
(0.9, 0.6)  LMP   0.322  0.700  0.892  0.958  0.990   1.000   1.000
            EL    0.608  0.588  0.666  0.730  0.846   0.956   1.000
(0.9, 0.9)  LMP   0.282  0.576  0.806  0.916  0.986   1.000   1.000
            EL    0.552  0.552  0.572  0.670  0.766   0.870   0.982
(0.9, 1.2)  LMP   0.272  0.488  0.682  0.874  0.956   1.000   1.000
            EL    0.532  0.482  0.548  0.620  0.710   0.866   0.940
(0.9, 1.5)  LMP   0.246  0.440  0.598  0.786  0.932   1.000   1.000
            EL    0.512  0.452  0.520  0.578  0.656   0.820   0.890
Table 7. Empirical powers of the LMP and EL tests at the significance level of 0.1 for Model 5.

(σα1, σα2)   Test  N=100  N=300  N=500  N=700  N=1000  N=2000  N=5000
r = 0.8
(0.36, 0.3)  LMP   0.274  0.530  0.738  0.886  0.946   0.992   1.000
             EL    0.532  0.418  0.506  0.544  0.552   0.662   0.884
(0.36, 0.4)  LMP   0.312  0.674  0.842  0.926  0.972   0.994   1.000
             EL    0.596  0.556  0.608  0.670  0.746   0.920   1.000
(0.46, 0.3)  LMP   0.314  0.650  0.784  0.864  0.884   0.948   0.988
             EL    0.574  0.442  0.454  0.432  0.410   0.378   0.854
(0.46, 0.4)  LMP   0.344  0.736  0.818  0.864  0.880   0.948   1.000
             EL    0.652  0.566  0.492  0.552  0.608   0.848   1.000
r = 0.9
(0.36, 0.3)  LMP   0.288  0.500  0.724  0.858  0.950   0.990   1.000
             EL    0.554  0.482  0.522  0.528  0.602   0.660   0.862
(0.36, 0.4)  LMP   0.292  0.626  0.788  0.872  0.880   0.967   0.996
             EL    0.562  0.508  0.398  0.492  0.412   0.430   1.000
(0.46, 0.3)  LMP   0.334  0.692  0.868  0.934  0.958   0.984   1.000
             EL    0.612  0.564  0.628  0.684  0.784   0.922   1.000
(0.46, 0.4)  LMP   0.340  0.666  0.810  0.854  0.900   0.962   0.996
             EL    0.610  0.562  0.510  0.550  0.606   0.850   1.000
Table 8. Empirical powers of the LMP and EL tests at the significance level of 0.05 for Model 3.

(σα1, σα2)   Test  N=100  N=300  N=500  N=700  N=1000  N=2000  N=5000
(0.3, 0.3)   LMP   0.142  0.342  0.488  0.648  0.830   0.998   1.000
             EL    0.408  0.348  0.386  0.490  0.594   0.818   0.972
(0.3, 0.4)   LMP   0.202  0.494  0.764  0.906  0.972   0.996   1.000
             EL    0.440  0.456  0.572  0.688  0.766   0.926   0.997
(0.36, 0.3)  LMP   0.202  0.458  0.662  0.824  0.938   0.998   1.000
             EL    0.440  0.412  0.438  0.516  0.652   0.794   0.986
(0.36, 0.4)  LMP   0.218  0.638  0.842  0.930  0.992   0.998   1.000
             EL    0.468  0.478  0.590  0.646  0.716   0.888   1.000
(0.4, 0.3)   LMP   0.208  0.504  0.760  0.906  0.968   0.998   1.000
             EL    0.424  0.402  0.470  0.530  0.620   0.734   0.988
(0.4, 0.4)   LMP   0.226  0.668  0.916  0.954  0.990   1.000   1.000
             EL    0.454  0.514  0.584  0.640  0.690   0.888   1.000
(0.46, 0.3)  LMP   0.256  0.574  0.854  0.946  0.984   1.000   1.000
             EL    0.488  0.492  0.572  0.658  0.698   0.888   0.989
(0.46, 0.4)  LMP   0.314  0.728  0.902  0.944  0.968   0.995   1.000
             EL    0.520  0.528  0.538  0.594  0.562   0.742   1.000
Table 9. Empirical powers of the LMP and EL tests at the significance level of 0.05 for Model 4.

(b1, b2)    Test  N=100  N=300  N=500  N=700  N=1000  N=2000  N=5000
(0.9, 0.6)  LMP   0.226  0.588  0.826  0.944  0.984   1.000   1.000
            EL    0.500  0.462  0.534  0.616  0.752   0.892   1.000
(0.9, 0.9)  LMP   0.194  0.448  0.690  0.844  0.964   1.000   1.000
            EL    0.442  0.434  0.458  0.540  0.658   0.800   0.954
(0.9, 1.2)  LMP   0.214  0.378  0.562  0.766  0.916   1.000   1.000
            EL    0.382  0.366  0.390  0.500  0.610   0.816   0.904
(0.9, 1.5)  LMP   0.168  0.314  0.466  0.678  0.870   1.000   1.000
            EL    0.396  0.326  0.404  0.468  0.546   0.766   0.864
Table 10. Empirical powers of the LMP and EL tests at the significance level of 0.05 for Model 5.

(σα1, σα2)   Test  N=100  N=300  N=500  N=700  N=1000  N=2000  N=5000
r = 0.8
(0.36, 0.3)  LMP   0.188  0.456  0.636  0.824  0.912   0.992   1.000
             EL    0.414  0.350  0.382  0.438  0.448   0.574   0.758
(0.36, 0.4)  LMP   0.248  0.576  0.802  0.912  0.968   0.994   1.000
             EL    0.506  0.416  0.472  0.552  0.614   0.816   1.000
(0.46, 0.3)  LMP   0.234  0.556  0.760  0.852  0.876   0.948   0.988
             EL    0.486  0.340  0.366  0.354  0.330   0.272   0.708
(0.46, 0.4)  LMP   0.278  0.674  0.798  0.860  0.880   0.946   1.000
             EL    0.546  0.476  0.400  0.456  0.476   0.694   1.000
r = 0.9
(0.36, 0.3)  LMP   0.234  0.398  0.646  0.786  0.920   0.990   1.000
             EL    0.456  0.362  0.378  0.454  0.508   0.560   0.724
(0.36, 0.4)  LMP   0.224  0.560  0.764  0.864  0.868   0.984   0.996
             EL    0.472  0.410  0.348  0.404  0.324   0.812   1.000
(0.46, 0.3)  LMP   0.248  0.604  0.834  0.920  0.950   0.967   1.000
             EL    0.480  0.452  0.514  0.600  0.664   0.320   0.998
(0.46, 0.4)  LMP   0.270  0.608  0.786  0.846  0.898   0.962   0.996
             EL    0.510  0.462  0.428  0.438  0.454   0.684   1.000

Share and Cite

MDPI and ACS Style

Bi, L.; Wang, D.; Cheng, L.; Qi, D. Testing Coefficient Randomness in Multivariate Random Coefficient Autoregressive Models Based on Locally Most Powerful Test. Mathematics 2024, 12, 2455. https://doi.org/10.3390/math12162455

