
Randomness Test of Thinning Parameters for the NBRCINAR(1) Process

Shuanghong Zhang

School of Mathematics and Computer Science, Jilin Normal University, Siping 136000, China
Axioms 2024, 13(4), 260; https://doi.org/10.3390/axioms13040260
Submission received: 20 February 2024 / Revised: 1 April 2024 / Accepted: 11 April 2024 / Published: 14 April 2024
(This article belongs to the Special Issue Time Series Analysis: Research on Data Modeling Methods)

Abstract
Non-negative integer-valued time series are usually encountered in practice, and a variety of integer-valued autoregressive processes based on various thinning operators are commonly used to model these count data with temporal dependence. In this paper, we consider a first-order integer-valued autoregressive process constructed by the negative binomial thinning operator with random coefficients, to address the problem of constant thinning parameters which might not always accurately represent real-world settings because of numerous external and internal causes. We estimate the model parameters of interest by the two-step conditional least squares method, obtain the asymptotic behaviors of the estimators, and furthermore devise a technique to test the constancy of the thinning parameters, which is essential for determining whether or not the proposed model should consider the parameters’ randomness. The effectiveness and dependability of the suggested approach are illustrated by a series of thorough simulation studies. Finally, two real-world data analysis examples reveal that the suggested approach is very useful and flexible for applications.

1. Introduction

Integer-valued time series arise widely in fields such as economics, sociology, clinical medicine, genetics, finance and meteorology, among others. These data often exhibit particular dependence and structural characteristics. When the observed counts are sufficiently large, traditional continuous-valued time series models can provide good approximations. However, when the counts are small, more suitable integer-valued time series models are needed for fitting and forecasting. Since the 1970s, therefore, an increasing number of scholars have focused on the statistical analysis and application of integer-valued time series, producing a large body of research.
The Markov chain is one of the earliest tools used to model integer-valued time series; Cox and Miller [1] provide a detailed introduction to this approach. However, Markov chains often suffer from over-parameterization and have inherent limitations in characterizing the dependence structure of the data. Jacobs and Lewis [2] construct a class of discrete autoregressive moving average (ARMA) time series with dependence structures similar to those of continuous ARMA models, but it is difficult in practice to find real data whose sample paths match the characteristics of such models. To overcome these challenges, Al-Osh and Alzaid [3] propose the first-order integer-valued autoregressive (INAR(1)) process
$X_t = \phi \circ X_{t-1} + \epsilon_t, \qquad t = 1, 2, \ldots,$
in which the so-called binomial thinning operator "∘", put forward by Steutel and van Harn [4], is defined by
$\phi \circ X = \sum_{i=1}^{X} B_i,$
where the thinning parameter ϕ ∈ [0, 1), X is a non-negative integer-valued random variable (count random variable), and {B_i, i = 1, 2, …}, independent of X, is a sequence of independent and identically distributed (i.i.d.) Bernoulli random variables with mean ϕ.
As one of the primary tools for analyzing integer-valued time series, the INAR(1) process based on the binomial thinning operator has been widely investigated because of its favorable statistical properties and extensive applications; readers may refer to [5,6,7,8] and the references therein for detailed surveys of these models. By definition (2), ϕ ∘ X never exceeds X, so the operator is appropriate for modelling counts of random events in which each unit can only survive or vanish during an observation period, i.e., each observed unit contributes 0 or 1 to the overall sum. However, the observed units may also be more correlated and dependent, and some of them can generate new units. For example, one criminal act in a certain district of a town may provoke one or more other crimes, and one COVID-19 patient can infect other people (it was reported that a single COVID-19 patient infected 44 people, and there was even a "super spreader" who caused 141 confirmed cases). To address this issue, Ristić et al. [9] introduce the negative binomial thinning operator, whose counting series consists of i.i.d. geometrically distributed random variables, and propose INAR(1) processes based on this operator; see also [10,11,12,13,14] for related studies.
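To make the contrast between the two operators concrete, the following minimal Python sketch (the function names are ours, purely illustrative, and not part of the paper) draws one realization of ϕ ∘ X and ϕ ∗ X for the same count X; the binomial-thinned value can never exceed X, whereas the negative-binomial-thinned value can.

```python
import numpy as np

rng = np.random.default_rng(1)

def binomial_thinning(x, phi):
    # phi ∘ x: each of the x units survives independently with probability phi.
    return rng.binomial(n=x, p=phi)

def negative_binomial_thinning(x, phi):
    # phi * x: each unit is replaced by a Geometric count on {0, 1, 2, ...}
    # with P(W = k) = phi^k / (1 + phi)^(k+1), which has mean phi.
    return rng.geometric(p=1.0 / (1.0 + phi), size=x).sum() - x

x, phi = 10, 0.5
print(binomial_thinning(x, phi))           # always <= x
print(negative_binomial_thinning(x, phi))  # may exceed x
```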
On the other hand, owing to various external and internal causes, the thinning parameter ϕ is often not constant but varies over time. For example, the number of patients after a given period may be affected by environmental conditions, medical supply, policies and past observations. With this concern in mind, Zheng et al. [15,16] propose pth-order and first-order random coefficient integer-valued autoregressive (RCINAR) processes by means of the binomial thinning operator, in which a sequence of i.i.d. random variables takes the place of the fixed thinning parameter. Zhang et al. [17] consider the empirical likelihood method for the estimation of such models. Recently, Refs. [18,19,20,21] have generalized the threshold autoregressive process, the process with dependent counting series and the binomial autoregressive process to the corresponding cases with random coefficients. Yu and Tao [22] give a consistent model selection procedure for the RCINAR process based on the conditional least squares estimating equations.
However, to our knowledge, there are few papers concerning RCINAR processes based on the negative binomial thinning operator. Among them, Yu et al. [23] introduce an observation-driven RCINAR process based on the negative binomial thinning operator, in which the thinning parameters depend on the previous observations, and use the conditional least squares and empirical likelihood methods to estimate the unknown parameters. As a continuation of this line of research, a crucial question when analyzing real data is to test whether the thinning parameters are random variables, so that an appropriate model can be chosen. For this important topic, Zhao and Hu [24] discuss the problem of testing the randomness of the thinning parameters in the INAR(1) process built from the binomial thinning operator by the two-step conditional least squares method, and [25] improves their results by developing a so-called locally most powerful-type test based on the likelihood function of the sample. In addition, Lu and Wang [26] propose a new test based on the empirical likelihood method. It should be noted that the methods suggested in Refs. [25,26] rely heavily on the distribution of the innovation ϵ_t, which is usually difficult to determine in practical applications. The goal of this paper is to examine the randomness test of the thinning parameters in the RCINAR(1) process based on the negative binomial thinning operator, using a two-step conditional least squares method similar to that in [24].
The rest of this paper is organized as follows. In Section 2, we introduce the considered model, together with some basic probabilistic and statistical properties. In Section 3, we estimate the parameters of interest by the two-step conditional least squares method proposed by Nicholls and Quinn [27] and obtain the asymptotic results for the estimators. In Section 4, we test the constancy of the thinning parameters of the considered model. In Section 5, a series of numerical simulations is conducted to assess the performance of the suggested method. In Section 6, the suggested method is applied to two sets of real-world data. In Section 7, we discuss possible expansions of this paper, mainly the limitations of the suggested method and several potential directions for future research. Section 8 summarizes the paper.

2. The Model and Some Properties

In this paper, we consider the first-order random coefficient integer-valued autoregressive process constructed through a negative binomial thinning operator, which is called NBRCINAR(1) for short. This process is defined by the recursive equation as follows:
$X_t = \phi_t * X_{t-1} + \epsilon_t, \qquad t = 1, 2, \ldots,$
in which the negative binomial thinning operator "∗" is defined by
$\phi_t * X_{t-1} = \sum_{i=1}^{X_{t-1}} W_i^{(t)},$
where {W_i^{(t)}, i = 1, 2, …} is a sequence of i.i.d. Geometric random variables, conditional on ϕ_t, with the common probability mass function
$P(W_i^{(t)} = k \mid \phi_t) = \frac{\phi_t^{k}}{(1+\phi_t)^{k+1}}, \qquad k = 0, 1, \ldots.$
In addition, it is also assumed that
  • (A1) {ϕ_t, t = 1, 2, …} is an i.i.d. non-negative sequence with finite mean denoted by ϕ = E(ϕ_t) and finite variance denoted by σ_1² = Var(ϕ_t).
  • (A2) {ϵ_t, t = 1, 2, …} is an i.i.d. non-negative integer-valued sequence, whose common mean and variance are denoted by λ = E(ϵ_t) and σ_2² = Var(ϵ_t), respectively.
  • (A3) X_0, {ϕ_t, t = 1, 2, …} and {ϵ_t, t = 1, 2, …} are independent.
  • (A4) For any fixed t and s (t ≠ s), the innovation ϵ_t and the counting series {W_i^{(t−l)}, i = 1, 2, …} (l ≥ 0) are independent. Moreover, {W_i^{(t)}, i = 1, 2, …} and {W_j^{(s)}, j = 1, 2, …} are independent.
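As a concrete illustration of model (3) under assumptions (A1)–(A4), the following Python sketch simulates a sample path; it is our own illustrative code (function and argument names are not from the paper), with the random coefficients and innovations supplied as user-chosen generators.

```python
import numpy as np

def simulate_nbrcinar1(n, draw_phi, draw_eps, x0=1, seed=0):
    """Simulate X_1, ..., X_n from X_t = phi_t * X_{t-1} + eps_t, where '*' is
    negative binomial thinning with a random coefficient phi_t (a sketch)."""
    rng = np.random.default_rng(seed)
    x = np.empty(n + 1, dtype=int)
    x[0] = x0
    for t in range(1, n + 1):
        phi_t = draw_phi(rng)                       # (A1): i.i.d. random coefficient
        # Negative binomial thinning: X_{t-1} i.i.d. Geometric counts with mean phi_t.
        thinned = rng.geometric(1.0 / (1.0 + phi_t), size=x[t - 1]).sum() - x[t - 1]
        x[t] = thinned + draw_eps(rng)              # (A2): i.i.d. innovation
    return x

# Example: Beta(0.1, 0.1) coefficients and Poisson(1) innovations (the Section 5 setting).
path = simulate_nbrcinar1(500,
                          draw_phi=lambda r: r.beta(0.1, 0.1),
                          draw_eps=lambda r: r.poisson(1.0))
print(path[:20])
```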
Remark 1.
When ϕ_t ≡ ϕ, t = 1, 2, …, model (3) reduces to the first-order integer-valued autoregressive process built from the negative binomial thinning operator with a constant thinning parameter, which was first proposed by Ristić et al. [9] and abbreviated as the NGINAR process. It should be noted, however, that the innovation ϵ_t in their model is assumed to follow a mixed geometric distribution, so that the process {X_t, t = 0, 1, …} itself is stationary with geometric marginals. This restriction has since been relaxed, and the model has been extended to more general cases. We refer to Gomes and Canto e Castro [28] for a first-order generalized random coefficient integer-valued autoregressive (GRCINAR(1)) process, which is constructed using a generalized thinning operator that includes the binomial thinning operator, the negative binomial thinning operator and some other thinning operators as special cases. In fact, our model can be regarded as a special case of the GRCINAR(1) process. However, Gomes and Canto e Castro [28] mainly consider the estimation of the model parameters, whereas we further focus on the randomness test of the thinning parameters and also discuss the forecasting problem for model (3).
According to Gomes and Canto e Castro [28], the NBRCINAR(1) process has the following important distributional properties.
Proposition 1.
The NBRCINAR(1) process { X t , t = 0 , 1 , } defined by (3) is a Markov chain with state space { 0 , 1 , 2 , } and the transition probabilities are given by
$P_{ij} = P(X_t = j \mid X_{t-1} = i) = I(i = 0)\, f_{\epsilon}(j) + I(i \neq 0) \sum_{k=0}^{j} f_{\epsilon}(j-k) \binom{i+k-1}{k} E\!\left[\frac{\phi_1^{k}}{(1+\phi_1)^{k+i}}\right],$
where f ϵ denotes the probability mass function of ϵ t .
Proposition 2.
For any t = 1, 2, …, it holds that
(1) $E(X_t \mid X_{t-1}) = \phi X_{t-1} + \lambda$.
(2) $\mathrm{Var}(X_t \mid X_{t-1}) = \sigma_1^2 X_{t-1}^2 + [\phi(1+\phi) + \sigma_1^2] X_{t-1} + \sigma_2^2$.
Proposition 3.
If we have $0 < E(\phi_t^2) = \phi^2 + \sigma_1^2 < 1$, then there exists a unique weakly stationary non-negative integer-valued process {X_t, t = 0, 1, …} satisfying (3).
Remark 2.
At the end of this section, it is worth noting that the process stated in Proposition 3 can also be shown to be strictly stationary and ergodic, by the same methods as those used in [29,30]. The details are omitted here for brevity.

3. Parameter Estimation and Asymptotic Properties of the Estimators

In this section, we discuss the estimation of the parameters of the NBRCINAR(1) process by the two-step conditional least squares method. For convenience, suppose that (X_0, X_1, …, X_n) is a series of observations satisfying (3). The unknown parameters of interest are η = (ϕ, λ)^T and θ = (σ_1², σ_2²)^T, whose true values are denoted by η_0 = (ϕ_0, λ_0)^T and θ_0 = (σ_{1,0}², σ_{2,0}²)^T, respectively.
Throughout the rest of the paper, we assume that the following two conditions hold:
(C1) {X_t, t = 0, 1, …} is a strictly stationary and ergodic process.
(C2) $E(X_t^8) < +\infty$, t = 0, 1, ….
In the first step, we focus on η = (ϕ, λ)^T. Let
$S(\eta) = \sum_{t=1}^{n} [X_t - E(X_t \mid X_{t-1})]^2 = \sum_{t=1}^{n} (X_t - \phi X_{t-1} - \lambda)^2$
be the conditional least squares (CLS) criterion function. Then, the CLS estimator of the parameter η is given by
$\hat{\eta} = \arg\min_{\eta} S(\eta).$
Setting $\partial S(\eta)/\partial\eta = 0$, we obtain
$\hat{\eta} = \Big(\frac{1}{n}\sum_{t=1}^{n} Y_t Y_t^{T}\Big)^{-1} \Big(\frac{1}{n}\sum_{t=1}^{n} X_t Y_t\Big).$
Noting that $Y_t = (X_{t-1}, 1)^T$, the above equation can be simplified to
$\hat{\phi} = \frac{n\sum_{t=1}^{n} X_t X_{t-1} - \sum_{t=1}^{n} X_t \sum_{t=1}^{n} X_{t-1}}{n\sum_{t=1}^{n} X_{t-1}^2 - \big(\sum_{t=1}^{n} X_{t-1}\big)^2}, \qquad \hat{\lambda} = \frac{1}{n}\Big(\sum_{t=1}^{n} X_t - \hat{\phi}\sum_{t=1}^{n} X_{t-1}\Big).$
To conduct the randomness test for the thinning parameter ϕ_t, we also need to estimate the parameter θ = (σ_1², σ_2²)^T and establish the asymptotic behavior of the estimators in a second step. For this purpose, we follow Nicholls and Quinn [27], Hwang and Basawa [31] and Zhu and Wang [32], and adopt the two-step least squares method that has been widely used for time series models with random coefficients. Denote
$V_t = [X_t - E(X_t \mid X_{t-1})]^2 = (X_t - \phi X_{t-1} - \lambda)^2;$
then it is easy to verify from Proposition 2 that
$E(V_t \mid X_{t-1}) = \mathrm{Var}(X_t \mid X_{t-1}) = \sigma_1^2 X_{t-1}^2 + [\phi(1+\phi) + \sigma_1^2] X_{t-1} + \sigma_2^2.$
Define $\delta = (\sigma_1^2,\ \phi(1+\phi)+\sigma_1^2,\ \sigma_2^2)^T$; then the CLS estimator of the parameter δ can be obtained by minimizing the sum
$Q(\delta) = \sum_{t=1}^{n} [V_t - E(V_t \mid X_{t-1})]^2 = \sum_{t=1}^{n} \big\{V_t - \sigma_1^2 X_{t-1}^2 - [\phi(1+\phi)+\sigma_1^2] X_{t-1} - \sigma_2^2\big\}^2 = \sum_{t=1}^{n} (V_t - Z_t^{T}\delta)^2,$
in which $Z_t = (X_{t-1}^2, X_{t-1}, 1)^T$. Thus, we obtain
$\hat{\delta} = \Big(\frac{1}{n}\sum_{t=1}^{n} Z_t Z_t^{T}\Big)^{-1} \Big(\frac{1}{n}\sum_{t=1}^{n} V_t Z_t\Big).$
Noting that $\hat{\delta}$ is a function of η, we write it as $\hat{\delta}(\eta)$. Substituting η̂ into this expression yields $\hat{\delta}(\hat{\eta})$, and the estimator of the parameter θ follows as
$\hat{\theta}_1 = \hat{\sigma}_1^2 = \hat{\delta}_1(\hat{\eta}), \qquad \hat{\theta}_2 = \hat{\sigma}_2^2 = \hat{\delta}_3(\hat{\eta}),$
in which θ̂_1 and θ̂_2 compose θ̂, while δ̂_1(η̂) and δ̂_3(η̂) are the first and third elements of δ̂(η̂), respectively.
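The two steps above amount to two linear regressions, so they can be implemented in a few lines. The sketch below is our own illustrative code (not the author's implementation); it returns ϕ̂, λ̂, σ̂_1² and σ̂_2², with negative values of σ̂_1² truncated at 0 as mentioned later in Section 5.1.

```python
import numpy as np

def two_step_cls(x):
    """Two-step CLS estimation for the NBRCINAR(1) process (a sketch).
    Step 1: regress X_t on Y_t = (X_{t-1}, 1)' to get (phi_hat, lambda_hat).
    Step 2: regress V_t = (X_t - phi_hat X_{t-1} - lambda_hat)^2 on
            Z_t = (X_{t-1}^2, X_{t-1}, 1)'; then sigma1_hat^2 = delta_hat[0]
            and sigma2_hat^2 = delta_hat[2]."""
    x = np.asarray(x, dtype=float)
    xt, xlag = x[1:], x[:-1]

    Y = np.column_stack([xlag, np.ones_like(xlag)])
    eta_hat = np.linalg.solve(Y.T @ Y, Y.T @ xt)              # (phi_hat, lambda_hat)

    V = (xt - Y @ eta_hat) ** 2
    Z = np.column_stack([xlag ** 2, xlag, np.ones_like(xlag)])
    delta_hat = np.linalg.solve(Z.T @ Z, Z.T @ V)

    phi_hat, lam_hat = eta_hat
    sigma1_sq = max(delta_hat[0], 0.0)   # ad hoc truncation at 0 (see Section 5.1)
    sigma2_sq = delta_hat[2]
    return phi_hat, lam_hat, sigma1_sq, sigma2_sq
```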
The following theorem establishes the asymptotic normality of the estimators.
Theorem 1.
If the assumptions (C1) and (C2) hold, then we have
$\big(\sqrt{n}(\hat{\eta}-\eta_0)^{T},\ \sqrt{n}(\hat{\delta}(\hat{\eta})-\delta_0)^{T}\big)^{T} \xrightarrow{L} N(0, \Omega), \qquad n \to +\infty,$
in which the covariance matrix is
$\Omega = (\omega_{ij})_{5\times 5} = \begin{pmatrix} V^{-1}\Phi V^{-1} & V^{-1}\Pi^{T} U^{-1} \\ U^{-1}\Pi V^{-1} & U^{-1}\Delta U^{-1} \end{pmatrix},$
$\eta_0 = (\phi_0, \lambda_0)^{T}$ and $\delta_0 = (\sigma_{1,0}^2,\ \phi_0(1+\phi_0)+\sigma_{1,0}^2,\ \sigma_{2,0}^2)^{T}$ denote the true values of the parameters η and δ, respectively, and
$V = E(Y_t Y_t^{T}), \qquad \Phi = E[(X_t - Y_t^{T}\eta_0)^2 Y_t Y_t^{T}], \qquad U = E(Z_t Z_t^{T}),$
$\Delta = E[(V_t - Z_t^{T}\delta_0)^2 Z_t Z_t^{T}], \qquad \Pi = E[(V_t - Z_t^{T}\delta_0)(X_t - Y_t^{T}\eta_0) Z_t Y_t^{T}].$
Proof. 
Noting that
$\hat{\eta} = \Big(\frac{1}{n}\sum_{t=1}^{n} Y_t Y_t^{T}\Big)^{-1} \Big(\frac{1}{n}\sum_{t=1}^{n} X_t Y_t\Big),$
we can obtain
$\sqrt{n}(\hat{\eta}-\eta_0) = \Big(\frac{1}{n}\sum_{t=1}^{n} Y_t Y_t^{T}\Big)^{-1} \Big(\frac{1}{\sqrt{n}}\sum_{t=1}^{n} Y_t (X_t - Y_t^{T}\eta_0)\Big).$
Denote $\mathcal{F}_t = \sigma(X_t, X_{t-1}, \ldots, X_0)$; then it is easy to check that
$E[Y_t (X_t - Y_t^{T}\eta_0) \mid \mathcal{F}_{t-1}] = 0;$
hence, for any $c = (c_1, c_2)^{T} \neq (0, 0)^{T}$, it follows that
$E[c^{T} Y_t (X_t - Y_t^{T}\eta_0) \mid \mathcal{F}_{t-1}] = 0.$
Meanwhile, by the assumptions (C1) and (C2), we have
$E[c^{T} Y_t (X_t - Y_t^{T}\eta_0)]^2 = c^{T}\Phi c = E[(c_1 X_{t-1} + c_2)^2 (X_t - \phi_0 X_{t-1} - \lambda_0)^2] < +\infty,$
in which
$\Phi = E[(X_t - Y_t^{T}\eta_0)^2 Y_t Y_t^{T}].$
Therefore, applying the central limit theorem for martingales (see Billingsley [33], for example) leads to
$\frac{1}{\sqrt{n}}\sum_{t=1}^{n} c^{T} Y_t (X_t - Y_t^{T}\eta_0) \xrightarrow{L} N(0, c^{T}\Phi c), \qquad n \to +\infty.$
Thus, according to the Cramér–Wold device, it holds that
$\frac{1}{\sqrt{n}}\sum_{t=1}^{n} Y_t (X_t - Y_t^{T}\eta_0) \xrightarrow{L} N(0, \Phi), \qquad n \to +\infty.$
Moreover, from the strict stationarity and ergodicity of {X_t, t = 0, 1, …}, we have
$\frac{1}{n}\sum_{t=1}^{n} Y_t Y_t^{T} \xrightarrow{a.s.} E(Y_t Y_t^{T}) = V, \qquad n \to +\infty.$
Hence, it follows that
$\sqrt{n}(\hat{\eta}-\eta_0) \xrightarrow{L} N(0, V^{-1}\Phi V^{-1}).$
Similarly, we can prove
$\big(\sqrt{n}(\hat{\eta}-\eta_0)^{T},\ \sqrt{n}(\hat{\delta}(\eta_0)-\delta_0)^{T}\big)^{T} \xrightarrow{L} N(0, \Omega), \qquad n \to +\infty.$
On the other hand, because
$E[Z_t X_{t-1}(X_t - \phi_0 X_{t-1} - \lambda_0)] = 0,$
we have
$\frac{1}{n}\sum_{t=1}^{n} Z_t X_{t-1}(X_t - \phi_0 X_{t-1} - \lambda_0) \xrightarrow{a.s.} 0, \qquad n \to +\infty,$
which together with (12) leads to
$\frac{1}{\sqrt{n}}\sum_{t=1}^{n} Z_t X_{t-1}(X_t - \phi_0 X_{t-1} - \lambda_0)(\phi_0 - \hat{\phi}) = \sqrt{n}(\phi_0 - \hat{\phi})\cdot\frac{1}{n}\sum_{t=1}^{n} Z_t X_{t-1}(X_t - \phi_0 X_{t-1} - \lambda_0) = o_p(1), \qquad n \to +\infty.$
Similarly, it can be proven that
$\frac{1}{\sqrt{n}}\sum_{t=1}^{n} Z_t X_{t-1}(X_t - \phi_0 X_{t-1} - \lambda_0)(\lambda_0 - \hat{\lambda}) = o_p(1), \qquad n \to +\infty.$
Meanwhile, it is easy to know from (12) that
$\sqrt{n}(\phi_0 - \hat{\phi})^2 = \frac{\sqrt{n}(\phi_0 - \hat{\phi})\cdot\sqrt{n}(\phi_0 - \hat{\phi})}{\sqrt{n}} = o_p(1), \qquad n \to +\infty,$
which together with
$\frac{1}{n}\sum_{t=1}^{n} Z_t X_{t-1}^2 \xrightarrow{a.s.} E(Z_t X_{t-1}^2), \qquad n \to +\infty,$
yields
$\frac{1}{\sqrt{n}}\sum_{t=1}^{n} Z_t X_{t-1}^2 (\phi_0 - \hat{\phi})^2 = \sqrt{n}(\phi_0 - \hat{\phi})^2\cdot\frac{1}{n}\sum_{t=1}^{n} Z_t X_{t-1}^2 = o_p(1), \qquad n \to +\infty.$
By the same method, we have
$\frac{1}{\sqrt{n}}\sum_{t=1}^{n} Z_t X_{t-1}^2 (\phi_0 - \hat{\phi})(\lambda_0 - \hat{\lambda}) = o_p(1), \qquad n \to +\infty$
and
$\frac{1}{\sqrt{n}}\sum_{t=1}^{n} Z_t X_{t-1}^2 (\lambda_0 - \hat{\lambda})^2 = o_p(1), \qquad n \to +\infty.$
Noting that
$\frac{1}{n}\sum_{t=1}^{n} Z_t Z_t^{T} \xrightarrow{a.s.} E(Z_t Z_t^{T}), \qquad n \to +\infty,$
and combining (14)–(18), it follows that
$\sqrt{n}\big(\hat{\delta}(\hat{\eta}) - \hat{\delta}(\eta_0)\big) = \Big(\frac{1}{n}\sum_{t=1}^{n} Z_t Z_t^{T}\Big)^{-1}\frac{1}{\sqrt{n}}\sum_{t=1}^{n} Z_t\big(V_t(\hat{\eta}) - V_t(\eta_0)\big)$
$= \Big(\frac{1}{n}\sum_{t=1}^{n} Z_t Z_t^{T}\Big)^{-1}\frac{1}{\sqrt{n}}\sum_{t=1}^{n} Z_t\big[(X_t - \hat{\phi}X_{t-1} - \hat{\lambda})^2 - (X_t - \phi_0 X_{t-1} - \lambda_0)^2\big]$
$= \Big(\frac{1}{n}\sum_{t=1}^{n} Z_t Z_t^{T}\Big)^{-1}\Big[\frac{2}{\sqrt{n}}\sum_{t=1}^{n} Z_t X_{t-1}(X_t - \phi_0 X_{t-1} - \lambda_0)(\phi_0 - \hat{\phi}) + \frac{2}{\sqrt{n}}\sum_{t=1}^{n} Z_t X_{t-1}(X_t - \phi_0 X_{t-1} - \lambda_0)(\lambda_0 - \hat{\lambda}) + \frac{1}{\sqrt{n}}\sum_{t=1}^{n} Z_t X_{t-1}^2(\phi_0 - \hat{\phi})^2 + \frac{2}{\sqrt{n}}\sum_{t=1}^{n} Z_t X_{t-1}^2(\phi_0 - \hat{\phi})(\lambda_0 - \hat{\lambda}) + \frac{1}{\sqrt{n}}\sum_{t=1}^{n} Z_t X_{t-1}^2(\lambda_0 - \hat{\lambda})^2\Big]$
$= o_p(1), \qquad n \to +\infty.$
Finally, we conclude that (11) holds, and consequently the theorem follows from
$\sqrt{n}\big(\hat{\delta}(\hat{\eta}) - \delta_0\big) = \sqrt{n}\big(\hat{\delta}(\hat{\eta}) - \hat{\delta}(\eta_0)\big) + \sqrt{n}\big(\hat{\delta}(\eta_0) - \delta_0\big),$
Equation (13) and the Slutsky theorem. □
Furthermore, by Proposition 6.4.3 in Brockwell and Davis [34], we can readily derive the following result.
Theorem 2.
If the assumptions (C1) and (C2) hold, then we have
$\big(\sqrt{n}(\hat{\eta}-\eta_0)^{T},\ \sqrt{n}(\hat{\theta}-\theta_0)^{T}\big)^{T} \xrightarrow{L} N(0, \Gamma\Omega\Gamma^{T}), \qquad n \to +\infty,$
in which $\theta_0 = (\sigma_{1,0}^2, \sigma_{2,0}^2)^{T}$ is the true value of $\theta = (\sigma_1^2, \sigma_2^2)^{T}$, and
$\Gamma = \begin{pmatrix} 1 & 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 & 1 \end{pmatrix}.$

4. Testing Problem

For the NBRCINAR(1) process, it is of great importance to test whether the thinning parameter is constant over time, because in the absence of stochastic time variation in the thinning parameter, (3) reduces to the INAR(1) process discussed in Ristić et al. [9]. To this end, we need to consider the following testing problem:
$H_0:\ \sigma_1^2 = 0 \qquad \text{vs.} \qquad H_1:\ \sigma_1^2 > 0.$
Let $e = (0, 0, 1, 0, 0)^{T}$. Based on Theorem 1 or Theorem 2, we have
$\sqrt{n}(\hat{\sigma}_1^2 - \sigma_{1,0}^2) \xrightarrow{L} N(0, e^{T}\Omega e), \qquad n \to +\infty.$
Thus, we need to estimate Ω consistently to obtain the test statistic. In what follows, we consider
$\hat{V} = \frac{1}{n}\sum_{t=1}^{n} Y_t Y_t^{T}, \qquad \hat{U} = \frac{1}{n}\sum_{t=1}^{n} Z_t Z_t^{T},$
$\hat{\Phi} = \frac{1}{n}\sum_{t=1}^{n} (X_t - Y_t^{T}\hat{\eta})^2 Y_t Y_t^{T},$
$\hat{\Delta} = \frac{1}{n}\sum_{t=1}^{n} \big(V_t(\hat{\eta}) - Z_t^{T}\hat{\delta}(\hat{\eta})\big)^2 Z_t Z_t^{T},$
$\hat{\Pi} = \frac{1}{n}\sum_{t=1}^{n} \big(V_t(\hat{\eta}) - Z_t^{T}\hat{\delta}(\hat{\eta})\big)(X_t - Y_t^{T}\hat{\eta}) Z_t Y_t^{T}.$
By the strict stationarity and ergodicity of {X_t, t = 0, 1, …}, it is easy to verify that
$\hat{V} \xrightarrow{a.s.} V, \qquad \hat{U} \xrightarrow{a.s.} U, \qquad n \to +\infty.$
As for Φ̂, Δ̂ and Π̂, the asymptotic properties are given by the following theorem.
Theorem 3.
If the assumptions (C1) and (C2) hold, then we have
$\hat{\Phi} \xrightarrow{P} \Phi, \qquad \hat{\Delta} \xrightarrow{P} \Delta, \qquad \hat{\Pi} \xrightarrow{P} \Pi, \qquad n \to +\infty.$
Proof. 
First, it is easy to calculate that
$\hat{\Phi} - \Phi = \frac{1}{n}\sum_{t=1}^{n}\big[(X_t - Y_t^{T}\hat{\eta})^2 - (X_t - Y_t^{T}\eta_0)^2\big] Y_t Y_t^{T} = \frac{1}{n}\sum_{t=1}^{n}\big[(Y_t^{T}\hat{\eta})^2 - (Y_t^{T}\eta_0)^2\big] Y_t Y_t^{T} + \frac{1}{n}\sum_{t=1}^{n} 2\big(X_t Y_t^{T}\eta_0 - X_t Y_t^{T}\hat{\eta}\big) Y_t Y_t^{T}.$
Because
$\sqrt{n}(\hat{\eta}-\eta_0) \xrightarrow{L} N(0, V^{-1}\Phi V^{-1}), \qquad n \to +\infty,$
we have
$\hat{\eta} - \eta_0 = o_p(1), \qquad \hat{\eta} + \eta_0 = O_p(1), \qquad n \to +\infty.$
Furthermore, $E(X_t^8) < +\infty$ and the ergodic theorem imply that
$\frac{1}{n}\sum_{t=1}^{n} \|Z_t\|_F^4 = O_p(1), \qquad n \to +\infty,$
in which $\|\cdot\|_F$ stands for the Frobenius norm. Then, it can be concluded that
$\Big\|\frac{1}{n}\sum_{t=1}^{n}\big[(Y_t^{T}\hat{\eta})^2 - (Y_t^{T}\eta_0)^2\big] Y_t Y_t^{T}\Big\|_F = \Big\|\frac{1}{n}\sum_{t=1}^{n}(\hat{\eta}-\eta_0)^{T} Y_t\, Y_t Y_t^{T}\, Y_t^{T}(\hat{\eta}+\eta_0)\Big\|_F \leq \|\hat{\eta}-\eta_0\|\,\Big(\frac{1}{n}\sum_{t=1}^{n}\|Y_t\|_F^4\Big)\,\|\hat{\eta}+\eta_0\| \xrightarrow{P} 0, \qquad n \to +\infty.$
Similarly, we can obtain
$\Big\|\frac{1}{n}\sum_{t=1}^{n} 2\big(X_t Y_t^{T}\eta_0 - X_t Y_t^{T}\hat{\eta}\big) Y_t Y_t^{T}\Big\|_F \xrightarrow{P} 0, \qquad n \to +\infty.$
Therefore, we obtain
$\hat{\Phi} \xrightarrow{P} \Phi, \qquad n \to +\infty.$
By the same arguments, we have
$\hat{\Delta} \xrightarrow{P} \Delta, \qquad \hat{\Pi} \xrightarrow{P} \Pi, \qquad n \to +\infty.$
The proof is thus completed. □
Finally, according to (21) and Theorem 3, it holds that
$\frac{\sqrt{n}(\hat{\sigma}_1^2 - \sigma_{1,0}^2)}{\sqrt{e^{T}\hat{\Omega}e}} \xrightarrow{L} N(0, 1), \qquad n \to +\infty.$
Based on this result, we construct the rejection region of the testing problem (20) at significance level α as
$W_{n,\alpha} = \Big\{\frac{\sqrt{n}\,\hat{\sigma}_1^2}{\sqrt{e^{T}\hat{\Omega}e}} \geq u_{\alpha}\Big\},$
where $u_{\alpha}$ is the upper α quantile of N(0, 1).
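Putting the pieces together, a possible implementation of the test statistic is sketched below. It follows the plug-in estimators V̂, Û, Φ̂, Δ̂ and Π̂ above, with the blocks of Ω̂ arranged so that the dimensions conform; it is our own illustrative code rather than the author's implementation.

```python
import numpy as np

def randomness_test_statistic(x):
    """Sketch of the test of H0: sigma1^2 = 0 vs H1: sigma1^2 > 0.
    Returns T_n = sqrt(n) * sigma1_hat^2 / sqrt(e' Omega_hat e); H0 is rejected at
    level alpha when T_n >= u_alpha (u_0.10 = 1.2816, u_0.05 = 1.6449 for N(0,1))."""
    x = np.asarray(x, dtype=float)
    xt, xlag = x[1:], x[:-1]
    n = len(xt)

    Y = np.column_stack([xlag, np.ones_like(xlag)])             # Y_t = (X_{t-1}, 1)'
    Z = np.column_stack([xlag ** 2, xlag, np.ones_like(xlag)])  # Z_t = (X_{t-1}^2, X_{t-1}, 1)'

    eta = np.linalg.solve(Y.T @ Y, Y.T @ xt)     # first-step CLS
    r1 = xt - Y @ eta                            # X_t - Y_t' eta_hat
    V = r1 ** 2
    delta = np.linalg.solve(Z.T @ Z, Z.T @ V)    # second-step CLS
    r2 = V - Z @ delta                           # V_t - Z_t' delta_hat

    Vh = (Y.T @ Y) / n                           # V_hat
    Uh = (Z.T @ Z) / n                           # U_hat
    Phih = (Y * (r1 ** 2)[:, None]).T @ Y / n    # Phi_hat
    Deltah = (Z * (r2 ** 2)[:, None]).T @ Z / n  # Delta_hat
    Pih = (Z * (r2 * r1)[:, None]).T @ Y / n     # Pi_hat (3 x 2)

    Vi, Ui = np.linalg.inv(Vh), np.linalg.inv(Uh)
    Omega = np.block([[Vi @ Phih @ Vi, Vi @ Pih.T @ Ui],
                      [Ui @ Pih @ Vi, Ui @ Deltah @ Ui]])       # 5 x 5 estimate of Omega
    e = np.array([0.0, 0.0, 1.0, 0.0, 0.0])
    return np.sqrt(n) * delta[0] / np.sqrt(e @ Omega @ e)
```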

5. Simulation Studies

In this section, we present extensive simulation studies to demonstrate the performance of the estimators and the test discussed in Section 3 and Section 4. To this end, we consider the following NBRCINAR(1) process:
$X_t = \phi_t * X_{t-1} + \epsilon_t, \qquad t = 1, 2, \ldots,$
in which {ϵ_t, t = 1, 2, …} is an i.i.d. Poisson sequence with mean λ, and {ϕ_t, t = 1, 2, …} is an i.i.d. sequence of Beta random variables with probability density function
$f(x) = \begin{cases} \dfrac{\Gamma(\alpha+\beta)}{\Gamma(\alpha)\Gamma(\beta)}\, x^{\alpha-1}(1-x)^{\beta-1}, & 0 < x < 1, \\ 0, & \text{otherwise}. \end{cases}$
It is easy to calculate that
$\phi = E(\phi_t) = \frac{\alpha}{\alpha+\beta}, \qquad \sigma_1^2 = \mathrm{Var}(\phi_t) = \frac{\alpha\beta}{(\alpha+\beta)^2(\alpha+\beta+1)},$
so for any α > 0 and β > 0, it obviously holds that
$0 < E(\phi_t^2) = \phi^2 + \sigma_1^2 = \frac{\alpha(\alpha+1)}{(\alpha+\beta)(\alpha+\beta+1)} < 1,$
which implies that condition (C1) can be satisfied. On the other hand, define
$X_t^{(n)} = \begin{cases} 0, & n < 0, \\ \epsilon_t, & n = 0, \\ \phi_t * X_{t-1}^{(n-1)} + \epsilon_t, & n > 0, \end{cases}$
where $t \in \mathbb{Z} = \{0, \pm 1, \pm 2, \ldots\}$, and
$\phi_t * X_{t-1}^{(n-1)} = \sum_{i=1}^{X_{t-1}^{(n-1)}} W_i^{(t)};$
then, from the proof of Theorem 2.1 in Gomes and Canto e Castro [28], the unique strictly stationary and ergodic process {X_t, t = 0, 1, …} that satisfies (3) can be obtained as
$X_t^{(n)} \xrightarrow{L_2} X_t, \qquad t = 0, 1, \ldots,$
where $\xrightarrow{L_2}$ denotes convergence in mean square. Because
$E(X_t^{(n)}) = E(\phi_t * X_{t-1}^{(n-1)}) + E(\epsilon_t) = \phi E(X_{t-1}^{(n-1)}) + \lambda = \cdots = \phi^{n} E(X_{t-n}^{(0)}) + \phi^{n-1}\lambda + \cdots + \phi\lambda + \lambda = \lambda\,\frac{1-\phi^{n+1}}{1-\phi} \to \frac{\lambda}{1-\phi}, \qquad n \to +\infty,$
it follows that
$E(X_t) = \frac{\lambda}{1-\phi} < +\infty.$
Similarly, we can obtain
$E\big[(X_t^{(n)})^2\big] = E\big[(\phi_t * X_{t-1}^{(n-1)})^2\big] + E(\epsilon_t^2) + 2E\big[\epsilon_t(\phi_t * X_{t-1}^{(n-1)})\big] = (\phi^2+\sigma_1^2)\, E\big[(X_{t-1}^{(n-1)})^2\big] + (\sigma_1^2+\phi^2+\phi+2\lambda\phi)\, E(X_{t-1}^{(n-1)}) + \lambda^2 + \lambda;$
repeating this procedure n times, and noting the facts that
$E\big[(X_{t-n}^{(0)})^2\big] = E(\epsilon_t^2) < +\infty, \qquad E(X_{t-n}^{(0)}) = E(\epsilon_t) < +\infty, \qquad 0 < \phi^2+\sigma_1^2 < 1,$
it holds that
$E\big[(X_t^{(n)})^2\big] < +\infty,$
which leads to
$E(X_t^2) < +\infty.$
By the same method, with some tedious calculations, we can verify that condition (C2) is also satisfied, i.e.,
$E(X_t^8) < +\infty, \qquad t = 0, 1, \ldots.$
For the simulations, we carry out all the calculations under the following 6 scenarios with X 0 = 1 :
(1) α = 0.1 , β = 0.1 , λ = 1 ; (2) α = 0.1 , β = 0.1 , λ = 2 ;
(3) α = 0.1 , β = 0.15 , λ = 1 ; (4) α = 0.15 , β = 0.1 , λ = 2 ;
(5) α = 0.15 , β = 0.1 , λ = 1 ; (6) α = 0.1 , β = 0.15 , λ = 2 .
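For reference, the true values of ϕ and σ_1² reported in Tables 1–3 and the stationarity check E(ϕ_t²) < 1 follow directly from the Beta moments above; a quick illustrative computation:

```python
import numpy as np

scenarios = [(0.1, 0.1, 1), (0.1, 0.1, 2), (0.1, 0.15, 1),
             (0.15, 0.1, 2), (0.15, 0.1, 1), (0.1, 0.15, 2)]

for a, b, lam in scenarios:
    phi = a / (a + b)                              # E(phi_t)
    s1 = a * b / ((a + b) ** 2 * (a + b + 1))      # Var(phi_t)
    print(f"alpha={a}, beta={b}, lambda={lam}: phi={phi:.4f}, "
          f"sigma1^2={s1:.4f}, E(phi_t^2)={phi**2 + s1:.4f}")
```

For instance, scenario (1) gives ϕ = 0.5, σ_1² = 0.2083 and E(ϕ_t²) = 0.4583 < 1, matching the true values used in Table 1.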

5.1. Estimate

For the generated samples with n = 100, 300, 500, 1000, 2000 and 5000, the effectiveness of the two-step CLS estimators is evaluated using the empirical bias (BIAS) and the mean squared error (MSE), which are defined for a parameter ϑ by
$\mathrm{BIAS} = \frac{1}{m}\sum_{k=1}^{m} \hat{\vartheta}_k - \vartheta_0, \qquad \mathrm{MSE} = \frac{1}{m}\sum_{k=1}^{m} (\hat{\vartheta}_k - \vartheta_0)^2,$
respectively, where m is the number of replications (m = 1000 in this paper), ϑ̂_k denotes the estimate of ϑ in the kth replication, and ϑ_0 denotes the true value of ϑ.
A summary of the simulation results is given in Table 1, Table 2 and Table 3. We can see that as the sample size increases, the values of BIAS and MSE gradually decrease, implying that the estimates are consistent for all the parameters. The estimates of ϕ and λ converge to their true values very fast, while for σ_1², considered in the second step of the CLS method, the convergence appears to be somewhat slow, and a large sample size is necessary to achieve good estimation results. One of the main reasons for this is that, for small sample sizes, values outside the allowed range for σ_1² can easily be obtained, i.e., cases with σ̂_1² < 0 may occur. Therefore, following Karlsen and Tjøstheim [35], we adjust the estimates in a somewhat ad hoc manner by taking the restriction on σ_1² into account as follows: if σ̂_1² < 0, we replace σ̂_1² by 0. Moreover, it should be noted that more appropriate adjustment methods may lead to better results.
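A minimal Monte Carlo loop of this kind, reusing the simulate_nbrcinar1 and two_step_cls sketches given earlier (so it is illustrative rather than the exact experimental code), computes BIAS and MSE for one scenario:

```python
import numpy as np

# Reuses simulate_nbrcinar1 (Section 2 sketch) and two_step_cls (Section 3 sketch).
def bias_mse(true_vals, m=1000, n=500, alpha=0.1, beta=0.1, lam=1.0):
    """Monte Carlo BIAS and MSE of (phi_hat, lambda_hat, sigma1_hat^2),
    with sigma1_hat^2 already truncated at 0 inside two_step_cls."""
    est = np.empty((m, 3))
    for k in range(m):
        x = simulate_nbrcinar1(n, lambda r: r.beta(alpha, beta),
                               lambda r: r.poisson(lam), seed=k)
        phi_hat, lam_hat, s1_hat, _ = two_step_cls(x)
        est[k] = [phi_hat, lam_hat, s1_hat]
    true_vals = np.asarray(true_vals, dtype=float)
    bias = est.mean(axis=0) - true_vals
    mse = ((est - true_vals) ** 2).mean(axis=0)
    return bias, mse

# Scenario (1): true values (phi, lambda, sigma1^2) = (0.5, 1, 0.2083).
bias, mse = bias_mse(true_vals=(0.5, 1.0, 0.2083), m=200, n=500)
print(bias, mse)
```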

5.2. Hypothesis Test

Now, we turn to the issue of testing the randomness of the thinning parameter in the NBRCINAR(1) process. For the empirical sizes, we consider the following model:
$X_t = \phi * X_{t-1} + \epsilon_t, \qquad t = 1, 2, \ldots.$
We assume ϵ_t ∼ P(λ), take ϕ = 0.5, 0.4, 0.6 and λ = 1, 2, and generate samples from the Poisson NBINAR(1) process (25) with various combinations of the parameters; we then compute the test statistic for n = 100, 500, 1000, 2000, 5000 and 10,000, and carry out 1000 replications in each case to calculate the observed percentage of rejections of the null hypothesis at significance levels τ = 0.10 and 0.05, respectively. Regarding the empirical power, the process under the alternative hypothesis is assumed to be (24) with the parameters of scenarios (1)–(6), and the rejection region is constructed based on the discussion in Section 4.
Table 4 and Table 5 present the empirical power and size of the tests under the different parameter scenarios. It can be observed that as the sample size increases, the empirical power gradually tends towards 1, which indicates that when the thinning parameters of the true model are random variables, this can be effectively identified. On the other hand, although the empirical size of the tests remains small, there is still some gap from the desired significance level. One important reason for this phenomenon is that when the thinning parameters are constant (i.e., σ_1² = 0), the estimator σ̂_1² and the test statistic √n σ̂_1²/√(e^T Ω̂ e) may sometimes be rather far from asymptotic normality, since the true value of σ_1² then falls on the boundary of the parameter space. The QQ plots of the estimator σ̂_1² under the different parameter scenarios for n = 1000, provided in Figure 1, also confirm this conclusion, indicating that further research is necessary to address this issue and improve our method. Meanwhile, it should be noted that a larger sample size results in better performance.

6. Real Data Analysis

In this section, we show how the suggested method can be applied to real-life situations. To this end, we focus on two real-world count data sets.

6.1. The Asymptomatic COVID-19 Cases in China

These data consist of the daily numbers of new asymptomatic COVID-19 cases in China, 534 observations in total (from 31 March 2020 to 15 September 2021), reported by the National Health Commission of the PRC. Figure 2 gives the sample path, the ACF and the PACF of the series, from which it is reasonable to assume that these data come from an INAR(1) process.
We mainly apply the following four models to fit and analyze the data; the latter two models are considered here in order to show the superiority of the models constructed from the negative binomial thinning operator.
(1) NBRCINAR(1) process: the first-order random coefficient integer-valued autoregressive process constructed from the negative binomial thinning operator, i.e.,
$X_t = \phi_t * X_{t-1} + \epsilon_t, \qquad t = 1, 2, \ldots,$
in which $\phi_t \sim \mathrm{Beta}(\alpha, \beta)$.
(2) NBINAR(1) process: the first-order integer-valued autoregressive process with constant coefficient constructed from the negative binomial thinning operator, i.e.,
$X_t = \phi * X_{t-1} + \epsilon_t, \qquad t = 1, 2, \ldots.$
(3) BRCINAR(1) process: the first-order random coefficient integer-valued autoregressive process based on the binomial thinning operator, i.e.,
$X_t = \phi_t \circ X_{t-1} + \epsilon_t, \qquad t = 1, 2, \ldots,$
in which $\phi_t \sim \mathrm{Beta}(\alpha, \beta)$.
(4) BINAR(1) process: the first-order integer-valued autoregressive process with constant coefficient based on the binomial thinning operator, i.e.,
$X_t = \phi \circ X_{t-1} + \epsilon_t, \qquad t = 1, 2, \ldots.$
Using the method described in Section 4 to test the randomness of the thinning parameter, we obtain a p-value of 0.0887, which suggests rejecting the null hypothesis σ_1² = 0 in favor of the alternative σ_1² > 0 at the 10% significance level; thus, we should apply the NBRCINAR(1) process to fit the data.
To make further comparisons among the aforementioned four models, we split the data set into two parts: the first 529 observations, from 31 March 2020 to 10 September 2021, are used as a training sample to estimate the parameters, and the last 5 observations, from 11 September to 15 September 2021, are retained as a forecasting evaluation sample for an out-of-sample experiment. When estimating the parameters, the two-step CLS method makes it unnecessary to specify the distribution of the innovation ϵ_t. Meanwhile, to improve the estimation performance, we use the block bootstrap method for dependent time series proposed by Künsch [36] to derive the model parameters (1000 replications) and then obtain the CLS estimates by averaging the bootstrap estimates. Moreover, the forecasting performance of the estimated INAR models is assessed by the forecast mean absolute error (FMAE) of the h-step-ahead forecasts, defined by
$\mathrm{FMAE} = \frac{1}{H}\sum_{h=1}^{H} |\hat{X}_{n+h} - X_{n+h}|,$
where H = 5, n = 529, and X̂_{n+h} is the coherent prediction of X_{n+h}.
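The FMAE is simply the mean absolute deviation between the hold-out observations and the point forecasts; for instance, using the hold-out observations and the NBRCINAR(1) MBB point forecasts reported later in Table 7:

```python
import numpy as np

def fmae(observed, forecast):
    # Forecast mean absolute error over the hold-out horizon H.
    observed = np.asarray(observed, dtype=float)
    forecast = np.asarray(forecast, dtype=float)
    return np.abs(observed - forecast).mean()

# Hold-out observations and the NBRCINAR(1) MBB point forecasts from Table 7.
print(fmae([44, 28, 20, 16, 13], [19, 20, 19, 18, 19]))   # 8.4
```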
For time series analysis, the conditional expectation (CE for short) is the most common technique for constructing forecasts, since it leads to the minimum mean squared error. However, for all four models mentioned above, we have
$E(X_{t+h} \mid X_t) = \phi^{h} X_t + \frac{1-\phi^{h}}{1-\phi}\,\lambda, \qquad t = 1, 2, \ldots,$
which implies that we cannot distinguish between them because of the identical CLS estimates of ϕ and λ. On the other hand, the conditional expectation usually fails to preserve the integer-valued nature of count data when making forecasts. To address these problems, Bisaglia and Gerolimetto [37] propose a new approach based on the autoregressive structure of the INAR model by means of bootstrap techniques, and we also employ this method for further analysis. Taking the NBRCINAR(1) process as an example, the corresponding algorithm steps are as follows:
Step 1. Estimate the unknown model parameters of interest, α and β, by the two-step CLS method to obtain α̂ and β̂.
Step 2. Compute the residuals as
$\hat{\epsilon}_t = X_t - \phi_t * X_{t-1}, \qquad t = 2, \ldots, n,$
in which {ϕ_t, t = 1, …, n} is a sequence of i.i.d. random variables drawn from Beta(α̂, β̂).
Step 3. Define the modified residuals ϵ̃_t as
$\tilde{\epsilon}_t = \begin{cases} \hat{\epsilon}_t, & \text{if } \hat{\epsilon}_t \geq 0, \\ 0, & \text{if } \hat{\epsilon}_t < 0, \end{cases} \qquad t = 2, \ldots, n,$
and then fit the empirical distribution $\hat{F}_{\epsilon}$ of ϵ̃_t.
Step 4. Obtain the bootstrapped series X_t^b for b = 1, …, B by
$X_t^{b} = \phi_t^{b} * X_{t-1} + \epsilon_t^{b}, \qquad t = 2, \ldots, n,$
where {ϕ_t^b, t = 1, …, n} and {ϵ_t^b, t = 1, …, n} are i.i.d. random samples from Beta(α̂, β̂) and F̂_ϵ, respectively.
Step 5. Based on the bootstrapped series {X_1^b, …, X_n^b}, derive the estimators α̂^b and β̂^b of α and β as in Step 1.
Step 6. Calculate the forecasts as
$X_{n+h}^{b} = \phi_{n+h}^{b} * X_{n+h-1}^{b} + \epsilon_{n+h}^{b}, \qquad h = 1, \ldots, H,$
where H denotes the largest prediction horizon, X_n^b = X_n, and {ϕ_{n+h}^b, h = 1, …, H} and {ϵ_{n+h}^b, h = 1, …, H} are i.i.d. random samples drawn from Beta(α̂^b, β̂^b) and F̂_ϵ, respectively.
Step 7. Obtain X̂_{n+h}, the point forecast of X_{n+h}, by taking the median of the replicates {X_{n+h}^1, …, X_{n+h}^B} for h = 1, …, H.
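A compact Python sketch of Steps 1–7 is given below. It is illustrative only: it reuses the two_step_cls sketch from Section 3, recovers (α̂, β̂) by moment matching from (ϕ̂, σ̂_1²), and, to keep the code short, omits the re-estimation of (α, β) on each bootstrapped series in Steps 4–5.

```python
import numpy as np

def mbb_forecast_nbrcinar1(x, H=5, B=501, seed=0):
    """Model-based INAR bootstrap point forecasts for the NBRCINAR(1) model (sketch)."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x, dtype=int)
    n = len(x)

    def beta_from_moments(mean, var):
        # Moment matching: mean = a/(a+b), var = ab/((a+b)^2 (a+b+1)); clamped for safety.
        mean = min(max(mean, 1e-3), 1 - 1e-3)
        var = min(max(var, 1e-6), mean * (1 - mean) * 0.99)
        c = mean * (1 - mean) / var - 1
        return mean * c, (1 - mean) * c

    def nb_thin(count, phi):
        # Negative binomial thinning: sum of `count` Geometric(mean phi) counts on {0,1,...}.
        return rng.geometric(1.0 / (1.0 + phi), size=count).sum() - count

    # Steps 1-3: estimate, form residuals against simulated phi_t, keep the non-negative part.
    phi_hat, _, s1_hat, _ = two_step_cls(x)            # sketch from Section 3
    a_hat, b_hat = beta_from_moments(phi_hat, s1_hat)
    phis = rng.beta(a_hat, b_hat, size=n - 1)
    eps = np.array([max(x[t] - nb_thin(x[t - 1], phis[t - 1]), 0) for t in range(1, n)])

    forecasts = np.empty((B, H))
    for b in range(B):
        # Steps 4-6: iterate the fitted recursion forward from X_n
        # (re-estimation on a bootstrapped series is omitted here for brevity).
        last = x[-1]
        for h in range(H):
            last = nb_thin(last, rng.beta(a_hat, b_hat)) + rng.choice(eps)
            forecasts[b, h] = last
    # Step 7: point forecast = median of the B replicates at each horizon.
    return np.median(forecasts, axis=0)
```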
Table 6 presents the parameter estimation results for the four models considered in this paper, i.e., the NBRCINAR(1) process (26), the NBINAR(1) process (27), the BRCINAR(1) process (28) and the BINAR(1) process (29). After applying the two-step CLS method, ϕ and λ have the same estimates in all four models. In addition, the estimates of α and β in the NBRCINAR(1) and BRCINAR(1) processes are also equal, which motivates the development of other estimation methods for these models in future work. Furthermore, when using the model-based INAR bootstrap (MBB for short) technique to construct the forecasts, we set B = 501 for the purpose of model selection, i.e., we predict X_{n+h} by MBB 501 times, take the median of the obtained replicates as the value of X̂_{n+h}, and then calculate the FMAE. According to the results reported in Table 7, the models with random coefficients outperform the models with a constant coefficient, which is consistent with the hypothesis testing result obtained above. Additionally, under the same assumption on the thinning parameters, the models based on the negative binomial thinning operator work better than the models based on the binomial thinning operator, and the NBRCINAR(1) process has the best predictive performance. Therefore, we conclude that the suggested model and method can be very helpful in practical applications.

6.2. Poliomyelitis Data in USA

In this subsection, we turn to the data set comprising the monthly numbers of poliomyelitis cases reported by the U.S. Centers for Disease Control. There are 168 observations in total, collected from January 1970 to December 1983. Figure 3 presents the sample path, the ACF and the PACF of the time series. This data set was previously used by Awale et al. [25] to test the constancy of the thinning parameter in a geometric INAR(1) model in which the distribution of the innovation ϵ_t is given. We now relax this condition and apply the NBRCINAR(1) process (26), the NBINAR(1) process (27), the BRCINAR(1) process (28) and the BINAR(1) process (29), without specifying ϵ_t, to fit the poliomyelitis data. Using the method discussed in Section 4, we carried out the test of H_0: σ_1² = 0 vs. H_1: σ_1² > 0, and the p-value turns out to be 0.4960. Hence, the same conclusion as in Awale et al. [25] is reached, i.e., we cannot reject the null hypothesis of a constant thinning parameter at the 5% and 10% significance levels. To verify this assertion, we estimate the parameters of interest and list the results in Table 8. Moreover, the predicted values are shown in Table 9, from which it can be seen that the models with a constant coefficient are more appropriate than the models with random coefficients for this poliomyelitis data set.

7. Possibility for Expansions

To handle the variability in thinning parameters that may arise owing to numerous external or internal causes, this paper offers an integer-valued time series model based on the negative binomial thinning operator (the NBRCINAR(1) process) for analyzing such count data and devises a technique to test the constancy of the thinning parameters, which is very important for determining whether or not a model with random coefficients should be applied.
One of the advantages of the method discussed in this paper is that we need not specify the distribution of the innovation ϵ_t, so it is worthwhile to apply this method to other integer-valued time series models, depending on the specific application, to account for different circumstances or data features. Some straightforward and interesting generalizations of the NBRCINAR(1) process are as follows:
(1) The first-order generalized random coefficient integer-valued autoregressive (GRCINAR(1)) process proposed by Gomes and Canto e Castro [28], i.e.,
$X_t = \phi_t \circ_G X_{t-1} + \epsilon_t, \qquad t = 1, 2, \ldots,$
in which the generalized thinning operator is defined by
$\phi_t \circ_G X_{t-1} \mid \phi_t, X_{t-1} \sim G(\phi_t X_{t-1}, \delta_t X_{t-1}),$
where $G(\phi_t X_{t-1}, \delta_t X_{t-1})$ denotes a given discrete-type distribution with mean $\phi_t X_{t-1}$ and variance $\delta_t X_{t-1}$, respectively. The generalized thinning operator obviously includes the binomial thinning operator, the negative binomial thinning operator, the expectation thinning operator and the Poisson thinning operator as special cases.
(2) The first-order random coefficient mixed-thinning integer-valued autoregressive (RCMTINAR(1)) process studied by Chang et al. [38], i.e.,
$X_t = \phi_t \ast_p X_{t-1} + \epsilon_t, \qquad t = 1, 2, \ldots,$
where the mixed thinning operator "∗_p" is defined by
$\phi_t \ast_p X_{t-1} = \sum_{i=1}^{X_{t-1}} W_i^{(t)},$
in which $\{W_i^{(t)}, t = 1, 2, \ldots, i = 1, 2, \ldots\}$ is a counting series given by
$W_i^{(t)} = \begin{cases} B_i^{(t)}, & \text{with probability } p, \\ G_i^{(t)}, & \text{with probability } 1-p, \end{cases} \qquad p \in [0, 1],$
with $\{B_i^{(t)}, i = 1, 2, \ldots\}$ being a sequence of conditionally independent Bernoulli random variables, and $\{G_i^{(t)}, i = 1, 2, \ldots\}$, independent of $\{B_i^{(t)}, i = 1, 2, \ldots\}$ given ϕ_t, being a sequence of conditionally independent Geometric random variables.
(3) The first-order random coefficient self-exciting integer-valued threshold autoregressive (RCTINAR(1)) process investigated by Yang et al. [18,21], i.e.,
$X_t = \begin{cases} \phi_{1,t} \circ X_{t-1} + \epsilon_t, & X_{t-1} \leq r, \\ \phi_{2,t} \circ X_{t-1} + \epsilon_t, & X_{t-1} > r, \end{cases} \qquad t = 1, 2, \ldots,$
in which "∘" is the binomial thinning operator, and r is the so-called threshold variable.
To test the randomness of the thinning parameters in these models, the attendant problem is how to estimate Var(ϕ_t), or Var(ϕ_{1,t}) and Var(ϕ_{2,t}), and how to establish the related asymptotic behavior of the estimators. We will focus on these issues in future studies.
On the other hand, the model and method considered in this paper have some limitations and constraints, which also suggest many interesting future studies. We discuss several topics as follows:
(1) Our results rely heavily on the assumptions (A1)–(A4), which may restrict the usefulness and applicability of our method. Therefore, more attention should be paid to extending the proposed method to models that relax the independence conditions in practice. For example, we can introduce a Markov-switching mechanism for {ϕ_t, t = 1, 2, …}, as in Lu and Wang [39], and explore random coefficient models with a dependent counting series {W_i^{(t)}, i = 1, 2, …}, as in Liu and Zhang [19], or with serially dependent innovations {ϵ_t, t = 1, 2, …}, as in Shirozhan and Mohammadpour [40].
(2) As the simulation results in Section 5 show, the estimates of σ_1² obtained in the second step of the CLS method converge to their true values at a rather slow rate, and there is some gap between the empirical sizes and the designated significance levels of the tests. Therefore, large sample sizes are needed to obtain good results, a requirement that may not be met in practice. To improve the performance of statistical inference, we could apply the so-called locally most powerful-type test developed by Awale et al. [25], but the distribution of the innovation ϵ_t then has to be specified first. We could also use the empirical likelihood method to obtain the estimators of the model parameters and then consider the constancy test of the thinning parameters; see Lu and Wang [26], for example.

8. Conclusions

In this paper, we consider the first-order random coefficient integer-valued autoregressive (NBRCINAR(1)) process based on the negative binomial thinning operator. We obtain the estimators of model parameters by the two-step conditional least squares method, and derive their asymptotic properties. We also consider the constancy test of thinning parameters. The simulation study demonstrates the effectiveness of our suggested method. The real data analysis reveals that our suggested method can be useful in practice. Finally, some possible extensions of this paper are provided. We leave these issues as our future work.

Funding

This research was supported by the National Natural Science Foundation of China (No. 12226506).

Data Availability Statement

The original data of asymptomatic COVID-19 cases in China presented in the study are openly available on the official website of the National Health Commission of the PRC at http://www.nhc.gov.cn/xcs/yqtb/list_gzbd.shtml. The poliomyelitis data for the USA presented in the study are available in Zeger [41].

Acknowledgments

The author would like to thank the editor and the three referees for their very constructive and pertinent suggestions that improve this paper a lot.

Conflicts of Interest

The author declares no conflicts of interest.

References

  1. Cox, D.R.; Miller, H.D. The Theory of Stochastic Processes; Methuen: London, UK, 1965. [Google Scholar]
  2. Jacobs, P.; Lewis, P. Stationary discrete autoregressive-moving average time series generated by mixtures. J. Time Ser. Anal. 1983, 4, 19–36. [Google Scholar] [CrossRef]
  3. Al-Osh, M.A.; Alzaid, A.A. First-order integer-valued autoregressive (INAR(1)) process. J. Time Ser. Anal. 1987, 8, 261–275. [Google Scholar] [CrossRef]
  4. Steutel, F.W.; van Harn, K. Discrete analogues of self-decomposability and stability. Ann. Probab. 1979, 7, 893–899. [Google Scholar] [CrossRef]
  5. Weiß, C.H. Thinning operations for modeling time series of counts—A survey. AStA Adv. Stat. Anal. 2008, 92, 319–343. [Google Scholar] [CrossRef]
  6. Scotto, M.G.; Weiß, C.H.; Gouveia, S. Thinning-based models in the analysis of integer-valued time series: A review. Stat. Model. 2015, 15, 590–618. [Google Scholar] [CrossRef]
  7. Weiß, C.H. An Introduction to Discrete-Valued Time Series; John Wiley and Sons Ltd.: Hoboken, NJ, USA, 2018. [Google Scholar]
  8. Weiß, C.H. Stationary count time series models. WIREs Comput. Stat. 2021, 13, e1502. [Google Scholar] [CrossRef]
  9. Ristić, M.M.; Bakouch, H.S.; Nastić, A.S. A new geometric first-order integer-valued autoregressive (NGINAR(1)) process. J. Stat. Plan. Inference 2009, 139, 2218–2226. [Google Scholar] [CrossRef]
  10. Ristić, M.M.; Nastić, A.S.; Bakouch, H.S. Estimation in an integer-valued autoregressive process with negative binomial marginals (NBINAR(1)). Commun. Stat.—Theory Methods 2012, 41, 606–618. [Google Scholar] [CrossRef]
  11. Yang, K.; Wang, D.H.; Jia, B.T.; Li, H. An integer-valued threshold autoregressive process based on negative binomial thinning. Stat. Pap. 2018, 59, 1131–1160. [Google Scholar] [CrossRef]
  12. Tian, S.Q.; Wang, D.H.; Cui, S. A seasonal geometric INAR process based on negative binomial thinning operator. Stat. Pap. 2020, 61, 2561–2581. [Google Scholar] [CrossRef]
  13. Wang, X.H.; Wang, D.H.; Yang, K.; Xu, D. Estimation and testing for the integer-valued threshold autoregressive models based on negative binomial thinning. Commun. Stat.—Simul. Comput. 2021, 50, 1622–1644. [Google Scholar] [CrossRef]
  14. Qian, L.Y.; Zhu, F.K. A new minification integer-valued autoregressive process driven by explanatory variables. Aust. N. Z. J. Stat. 2022, 64, 478–494. [Google Scholar] [CrossRef]
  15. Zheng, H.T.; Basawa, I.V.; Datta, S. Inference for pth-order random coefficient integer-valued autoregressive processes. J. Time Ser. Anal. 2006, 27, 411–440. [Google Scholar] [CrossRef]
  16. Zheng, H.T.; Basawa, I.V.; Datta, S. First-order random coefficient integer-valued autoregressive process. J. Stat. Plan. Inference 2007, 137, 212–229. [Google Scholar] [CrossRef]
  17. Zhang, H.X.; Wang, D.H.; Zhu, F.K. Empirical likelihood inference for random coefficient INAR(p) process. J. Time Ser. Anal. 2011, 32, 195–223. [Google Scholar] [CrossRef]
  18. Yang, K.; Li, H.; Wang, D.H.; Zhang, C.H. Random coefficients integer-valued threshold autoregressive processes driven by logistic regression. AStA Adv. Stat. Anal. 2021, 105, 533–557. [Google Scholar] [CrossRef]
  19. Liu, J.; Zhang, H.X. First-order random coefficient INAR process with dependent counting series. Commun. Stat.—Simul. Comput. 2022, 51, 3341–3354. [Google Scholar] [CrossRef]
  20. Li, H.; Liu, Z.J.; Yang, K.; Dong, X.G.; Wang, W.S. A pth-order random coefficients mixed binomial autoregressive process with explanatory variables. Comput. Stat. 2023. [Google Scholar] [CrossRef]
  21. Yang, K.; Li, A.; Yu, X.Y.; Dong, X.G. On MCMC sampling in random coefficients self-exciting integer-valued threshold autoregressive processes. J. Stat. Comput. Simul. 2024, 94, 164–182. [Google Scholar] [CrossRef]
  22. Yu, K.Z.; Tao, T.L. Consistent model selection procedure for random coefficient INAR models. Entropy 2023, 25, 1220. [Google Scholar] [CrossRef]
  23. Yu, M.J.; Wang, D.H.; Yang, K. A class of observation-driven random coefficient INAR(1) processes based on negative binomial thinning. J. Korean Stat. Soc. 2019, 48, 248–264. [Google Scholar] [CrossRef]
  24. Zhao, Z.W.; Hu, Y.D. Statistical inference for first-order random coefficient integer-valued autoregressive processes. J. Inequalities Appl. 2015, 2015, 359. [Google Scholar] [CrossRef]
  25. Awale, M.; Balakrishna, N.; Ramanathan, T.V. Testing the constancy of the thinning parameter in a random coefficient integer autoregressive model. Stat. Pap. 2019, 60, 1515–1539. [Google Scholar] [CrossRef]
  26. Lu, F.L.; Wang, D.H. A new estimation for INAR(1) process with Poisson distribution. Comput. Stat. 2022, 37, 1185–1201. [Google Scholar] [CrossRef]
  27. Nicholls, D.; Quinn, B. Random Coefficient Autoregressive Models: An Introduction; Springer: Berlin, Germany, 1982. [Google Scholar]
  28. Gomes, D.; Canto e Castro, L. Generalized integer-valued random coefficient for a first order structure autoregressive (RCINAR) process. J. Stat. Plan. Inference 2009, 139, 4088–4097. [Google Scholar] [CrossRef]
  29. Zhang, H.X.; Wang, D.H.; Zhu, F.K. Inference for INAR(p) processes with signed generalized power series thinning operator. J. Stat. Plan. Inference 2009, 140, 667–683. [Google Scholar] [CrossRef]
  30. Zheng, H.T.; Basawa, I.V. First-order observation-driven integer-valued autoregressive processes. Stat. Probab. Lett. 2008, 78, 1–9. [Google Scholar] [CrossRef]
  31. Hwang, S.Y.; Basawa, I.V. Parameter estimation for generalized random coefficient autoregressive processes. J. Stat. Plan. Inference 1998, 68, 323–337. [Google Scholar] [CrossRef]
  32. Zhu, F.K.; Wang, D.H. Estimation of Parameters in the NLAR(p) Model. J. Time Ser. Anal. 2008, 29, 619–628. [Google Scholar] [CrossRef]
  33. Billingsley, P. The Lindeberg-Lévy theorem for martingales. Proc. Am. Math. Soc. 1961, 12, 788–792. [Google Scholar]
  34. Brockwell, P.J.; Davis, R.A. Time Series: Theory and Methods, 2nd ed.; Springer: New York, NY, USA, 1991. [Google Scholar]
  35. Karlsen, H.; Tjøstheim, D. Consistent estimates for the NEAR(2) and NLAR(2) time series models. J. R. Stat.-Soc.-Ser. B 1988, 50, 313–320. [Google Scholar] [CrossRef]
  36. Künsch, H.R. The jackknife and the bootstrap for general stationary observations. Ann. Stat. 1989, 17, 1217–1241. [Google Scholar] [CrossRef]
  37. Bisaglia, L.; Gerolimetto, M. Model-based INAR bootstrap for forecasting INAR(p) models. Comput. Stat. 2019, 34, 1815–1848. [Google Scholar] [CrossRef]
  38. Chang, L.Y.; Liu, X.F.; Wang, D.H.; Jing, Y.C.; Li, C.L. First-order random coefficient mixed-thinning integer-valued autoregressive model. J. Comput. Appl. Math. 2022, 410, 114222. [Google Scholar] [CrossRef]
  39. Lu, F.L.; Wang, D.H. First-order integer-valued autoregressive process with Markov-switching coefficients. Commun. Stat.—Theory Methods 2022, 51, 4313–4329. [Google Scholar] [CrossRef]
  40. Shirozhan, M.; Mohammadpour, M. A dependent counting INAR model with serially dependent innovation. J. Appl. Stat. 2021, 48, 1975–1997. [Google Scholar] [CrossRef]
  41. Zeger, S.L. A regression model for time series of counts. Biometrika 1988, 75, 621–629. [Google Scholar] [CrossRef]
Figure 1. QQ plots of the estimator σ̂_1².
Figure 2. The sample path, ACF and PACF of the asymptomatic COVID-19 cases in China.
Figure 3. The sample path, ACF and PACF of the poliomyelitis data in the USA.
Table 1. Simulation results for α = 0.1, β = 0.1, λ = 1 and α = 0.1, β = 0.1, λ = 2.

n | Parameter | Estimate | BIAS | MSE | Parameter | Estimate | BIAS | MSE
100 | ϕ = 0.5 | 0.4030 | −0.0970 | 0.0324 | ϕ = 0.5 | 0.4096 | −0.0904 | 0.0286
 | σ1² = 0.2083 | 0.0380 | −0.1703 | 0.0350 | σ1² = 0.2083 | 0.0529 | −0.1554 | 0.0338
 | λ = 1 | 1.1550 | 0.1550 | 0.0893 | λ = 2 | 2.2974 | 0.2974 | 0.3045
300 | ϕ = 0.5 | 0.4527 | −0.0473 | 0.0139 | ϕ = 0.5 | 0.4550 | −0.0450 | 0.0127
 | σ1² = 0.2083 | 0.0680 | −0.1403 | 0.0279 | σ1² = 0.2083 | 0.0875 | −0.1208 | 0.0235
 | λ = 1 | 1.0761 | 0.0761 | 0.0347 | λ = 2 | 2.1435 | 0.1435 | 0.1260
500 | ϕ = 0.5 | 0.4689 | −0.0311 | 0.0102 | ϕ = 0.5 | 0.4749 | −0.0251 | 0.0086
 | σ1² = 0.2083 | 0.0864 | −0.1219 | 0.0228 | σ1² = 0.2083 | 0.1085 | −0.0998 | 0.0179
 | λ = 1 | 1.0478 | 0.0478 | 0.0254 | λ = 2 | 2.0895 | 0.0895 | 0.0942
1000 | ϕ = 0.5 | 0.4789 | −0.0211 | 0.0059 | ϕ = 0.5 | 0.4850 | −0.0150 | 0.0047
 | σ1² = 0.2083 | 0.1149 | −0.0934 | 0.0157 | σ1² = 0.2083 | 0.1401 | −0.0682 | 0.0112
 | λ = 1 | 1.0387 | 0.0387 | 0.0159 | λ = 2 | 2.0489 | 0.0489 | 0.0501
2000 | ϕ = 0.5 | 0.4861 | −0.0139 | 0.0033 | ϕ = 0.5 | 0.4928 | −0.0072 | 0.0028
 | σ1² = 0.2083 | 0.1332 | −0.0751 | 0.0111 | σ1² = 0.2083 | 0.1597 | −0.0486 | 0.0068
 | λ = 1 | 1.0228 | 0.0228 | 0.0086 | λ = 2 | 2.0265 | 0.0265 | 0.0308
5000 | ϕ = 0.5 | 0.4965 | −0.0035 | 0.0015 | ϕ = 0.5 | 0.4970 | −0.0030 | 0.0012
 | σ1² = 0.2083 | 0.1649 | −0.0434 | 0.0054 | σ1² = 0.2083 | 0.1800 | −0.0283 | 0.0034
 | λ = 1 | 1.0044 | 0.0044 | 0.0044 | λ = 2 | 2.0118 | 0.0118 | 0.0140
Table 2. Simulation results for α = 0.1, β = 0.15, λ = 1 and α = 0.1, β = 0.15, λ = 2.

n | Parameter | Estimate | BIAS | MSE | Parameter | Estimate | BIAS | MSE
100 | ϕ = 0.4 | 0.3179 | −0.0821 | 0.0290 | ϕ = 0.4 | 0.3238 | −0.0762 | 0.0278
 | σ1² = 0.1920 | 0.0258 | −0.1662 | 0.0314 | σ1² = 0.1920 | 0.0318 | −0.1602 | 0.0300
 | λ = 1 | 1.1072 | 0.1072 | 0.0566 | λ = 2 | 2.2093 | 0.2093 | 0.2294
300 | ϕ = 0.4 | 0.3622 | −0.0378 | 0.0120 | ϕ = 0.4 | 0.3694 | −0.0306 | 0.0119
 | σ1² = 0.1920 | 0.0531 | −0.1380 | 0.0249 | σ1² = 0.1920 | 0.0629 | −0.1291 | 0.0230
 | λ = 1 | 1.0480 | 0.0480 | 0.0219 | λ = 2 | 2.0892 | 0.0892 | 0.0905
500 | ϕ = 0.4 | 0.3712 | −0.0288 | 0.0087 | ϕ = 0.4 | 0.3778 | −0.0222 | 0.0080
 | σ1² = 0.1920 | 0.0597 | −0.1323 | 0.0227 | σ1² = 0.1920 | 0.0760 | −0.1160 | 0.0192
 | λ = 1 | 1.0421 | 0.0421 | 0.0174 | λ = 2 | 2.0573 | 0.0573 | 0.0604
1000 | ϕ = 0.4 | 0.3830 | −0.0170 | 0.0055 | ϕ = 0.4 | 0.3859 | −0.0141 | 0.0043
 | σ1² = 0.1920 | 0.0836 | −0.1084 | 0.0176 | σ1² = 0.1920 | 0.1045 | −0.0875 | 0.0129
 | λ = 1 | 1.0227 | 0.0227 | 0.0108 | λ = 2 | 2.0386 | 0.0386 | 0.0353
2000 | ϕ = 0.4 | 0.3925 | −0.0075 | 0.0029 | ϕ = 0.4 | 0.3954 | −0.0046 | 0.0025
 | σ1² = 0.1920 | 0.1056 | −0.0864 | 0.0121 | σ1² = 0.1920 | 0.1271 | −0.0649 | 0.0082
 | λ = 1 | 1.0105 | 0.0105 | 0.0061 | λ = 2 | 2.0122 | 0.0122 | 0.0201
5000 | ϕ = 0.4 | 0.3970 | −0.0030 | 0.0013 | ϕ = 0.4 | 0.3978 | −0.0022 | 0.0011
 | σ1² = 0.1920 | 0.1387 | −0.0533 | 0.0063 | σ1² = 0.1920 | 0.1492 | −0.0428 | 0.0044
 | λ = 1 | 1.0037 | 0.0037 | 0.0027 | λ = 2 | 2.0059 | 0.0059 | 0.0091
Table 3. Simulation results for α = 0.15, β = 0.1, λ = 1 and α = 0.15, β = 0.1, λ = 2.

n | Parameter | Estimate | BIAS | MSE | Parameter | Estimate | BIAS | MSE
100 | ϕ = 0.6 | 0.4920 | −0.1080 | 0.0343 | ϕ = 0.6 | 0.5084 | −0.0916 | 0.0291
 | σ1² = 0.1920 | 0.0532 | −0.1388 | 0.0292 | σ1² = 0.1920 | 0.0822 | −0.1098 | 0.0275
 | λ = 1 | 1.2004 | 0.2004 | 0.1267 | λ = 2 | 2.3671 | 0.3671 | 0.4273
300 | ϕ = 0.6 | 0.5428 | −0.0572 | 0.0140 | ϕ = 0.6 | 0.5542 | −0.0458 | 0.0108
 | σ1² = 0.1920 | 0.0921 | −0.0999 | 0.0215 | σ1² = 0.1920 | 0.1186 | −0.0734 | 0.0184
 | λ = 1 | 1.1124 | 0.1124 | 0.0538 | λ = 2 | 2.1917 | 0.1917 | 0.1665
500 | ϕ = 0.6 | 0.5615 | −0.0385 | 0.0096 | ϕ = 0.6 | 0.5714 | −0.0286 | 0.0075
 | σ1² = 0.1920 | 0.1046 | −0.0874 | 0.0183 | σ1² = 0.1920 | 0.1356 | −0.0564 | 0.0146
 | λ = 1 | 1.0732 | 0.0732 | 0.0358 | λ = 2 | 2.1285 | 0.1285 | 0.1184
1000 | ϕ = 0.6 | 0.5615 | −0.0385 | 0.0096 | ϕ = 0.6 | 0.5793 | −0.0207 | 0.0044
 | σ1² = 0.1920 | 0.1338 | −0.0582 | 0.0136 | σ1² = 0.1920 | 0.1532 | −0.0388 | 0.0101
 | λ = 1 | 1.0446 | 0.0446 | 0.0222 | λ = 2 | 2.0824 | 0.0824 | 0.0693
2000 | ϕ = 0.6 | 0.5851 | −0.0149 | 0.0030 | ϕ = 0.6 | 0.5924 | −0.0076 | 0.0022
 | σ1² = 0.1920 | 0.1496 | −0.0424 | 0.0093 | σ1² = 0.1920 | 0.1623 | −0.0297 | 0.0073
 | λ = 1 | 1.0276 | 0.0276 | 0.0117 | λ = 2 | 2.0362 | 0.0362 | 0.0390
5000 | ϕ = 0.6 | 0.5939 | −0.0061 | 0.0014 | ϕ = 0.6 | 0.5956 | −0.0044 | 0.0009
 | σ1² = 0.1920 | 0.1664 | −0.0256 | 0.0056 | σ1² = 0.1920 | 0.1739 | −0.0181 | 0.0041
 | λ = 1 | 1.0130 | 0.0130 | 0.0060 | λ = 2 | 2.0191 | 0.0191 | 0.0161
Table 4. Empirical power of the test at significance levels τ = 0.05 and 0.10.

Level | Parameter (α, β, λ) | n = 100 | n = 500 | n = 1000 | n = 2000 | n = 5000 | n = 10,000
0.10 | (0.1, 0.1, 1) | 0.248 | 0.302 | 0.462 | 0.652 | 0.902 | 0.973
 | (0.1, 0.1, 2) | 0.256 | 0.432 | 0.639 | 0.812 | 0.962 | 0.990
 | (0.1, 0.15, 1) | 0.233 | 0.271 | 0.302 | 0.475 | 0.812 | 0.943
 | (0.1, 0.15, 2) | 0.208 | 0.258 | 0.440 | 0.748 | 0.923 | 0.983
 | (0.15, 0.1, 1) | 0.284 | 0.374 | 0.484 | 0.633 | 0.858 | 0.968
 | (0.15, 0.1, 2) | 0.303 | 0.459 | 0.599 | 0.745 | 0.948 | 0.999
0.05 | (0.1, 0.1, 1) | 0.158 | 0.234 | 0.393 | 0.577 | 0.844 | 0.945
 | (0.1, 0.1, 2) | 0.172 | 0.357 | 0.552 | 0.738 | 0.923 | 0.975
 | (0.1, 0.15, 1) | 0.130 | 0.150 | 0.200 | 0.370 | 0.699 | 0.890
 | (0.1, 0.15, 2) | 0.122 | 0.178 | 0.314 | 0.520 | 0.857 | 0.967
 | (0.15, 0.1, 1) | 0.210 | 0.309 | 0.410 | 0.536 | 0.767 | 0.943
 | (0.15, 0.1, 2) | 0.237 | 0.398 | 0.500 | 0.638 | 0.893 | 0.975
Table 5. Empirical size of the test at significance levels τ = 0.05 and 0.10.

Level | Parameter (ϕ, λ) | n = 100 | n = 500 | n = 1000 | n = 2000 | n = 5000 | n = 10,000 | n = 20,000
0.10 | (0.5, 1) | 0.037 | 0.033 | 0.034 | 0.031 | 0.038 | 0.045 | 0.057
 | (0.5, 2) | 0.037 | 0.029 | 0.033 | 0.048 | 0.049 | 0.058 | 0.065
 | (0.4, 1) | 0.033 | 0.033 | 0.042 | 0.037 | 0.043 | 0.042 | 0.059
 | (0.4, 2) | 0.037 | 0.028 | 0.033 | 0.048 | 0.049 | 0.058 | 0.063
 | (0.6, 1) | 0.039 | 0.043 | 0.035 | 0.039 | 0.043 | 0.040 | 0.052
 | (0.6, 2) | 0.036 | 0.032 | 0.027 | 0.032 | 0.043 | 0.048 | 0.065
0.05 | (0.5, 1) | 0.021 | 0.022 | 0.012 | 0.011 | 0.015 | 0.017 | 0.026
 | (0.5, 2) | 0.020 | 0.014 | 0.016 | 0.025 | 0.019 | 0.027 | 0.031
 | (0.4, 1) | 0.019 | 0.015 | 0.016 | 0.017 | 0.015 | 0.015 | 0.023
 | (0.4, 2) | 0.022 | 0.014 | 0.016 | 0.025 | 0.019 | 0.027 | 0.034
 | (0.6, 1) | 0.028 | 0.013 | 0.015 | 0.017 | 0.013 | 0.015 | 0.027
 | (0.6, 2) | 0.017 | 0.017 | 0.012 | 0.005 | 0.014 | 0.016 | 0.023
Table 6. Estimates of the parameters in the models for the asymptomatic cases of COVID-19 in China.

Model | ϕ | α | β | λ
NBINAR(1) and BINAR(1) | 0.5788 | - | - | 9.3869
NBRCINAR(1) and BRCINAR(1) | 0.5788 | 2.5027 | 1.8209 | 9.3869
Table 7. Forecasting performance results for the asymptomatic cases of COVID-19 in China.

Date | 11 September | 12 September | 13 September | 14 September | 15 September | FMAE
Observations | 44 | 28 | 20 | 16 | 13 | -
Forecast-CE | 21.5425 | 21.8565 | 22.0382 | 22.1434 | 22.2043 | 9.1974
Forecast-MBB for NBRCINAR(1) | 19 | 20 | 19 | 18 | 19 | 8.4000
Forecast-MBB for NBINAR(1) | 18 | 19 | 19 | 19 | 20 | 9.2000
Forecast-MBB for BRCINAR(1) | 19 | 19 | 19 | 19 | 19 | 8.8000
Forecast-MBB for BINAR(1) | 17 | 17 | 18 | 18 | 18 | 9.4000
Table 8. Estimates of the parameters in the models for the poliomyelitis data in the USA.

Model | ϕ | α | β | λ
NBINAR(1) and BINAR(1) | 0.2540 | - | - | 0.9720
NBRCINAR(1) and BRCINAR(1) | 0.2540 | 2.2654 | 6.6521 | 0.9720
Table 9. Forecasting performance results for the poliomyelitis data in the USA.

Time | August 1983 | September 1983 | October 1983 | November 1983 | December 1983 | FMAE
Observations | 1 | 0 | 1 | 3 | 6 | -
Forecast-CE | 1.4767 | 1.3442 | 1.3106 | 1.3021 | 1.2999 | 1.7059
Forecast-MBB for NBRCINAR(1) | 1 | 1 | 1 | 1 | 1 | 1.6000
Forecast-MBB for NBINAR(1) | 2 | 1 | 1 | 2 | 2 | 1.4000
Forecast-MBB for BRCINAR(1) | 1 | 2 | 1 | 1 | 1 | 1.8000
Forecast-MBB for BINAR(1) | 1 | 1 | 1 | 1 | 1 | 1.6000
