Article

A New Bivariate Random Coefficient INAR(1) Model with Applications

1 College of Mathematics, Changchun Normal University, Changchun 130032, China
2 School of Mathematics and Statistics, Henan University, Kaifeng 475004, China
3 College of Mathematics, Taiyuan University of Technology, Taiyuan 030024, China
* Author to whom correspondence should be addressed.
Symmetry 2022, 14(1), 39; https://doi.org/10.3390/sym14010039
Submission received: 25 October 2021 / Revised: 12 November 2021 / Accepted: 20 November 2021 / Published: 29 December 2021

Abstract: Excess zeros are a common phenomenon in time series of counts, but they are not well studied in asymmetrically structured bivariate cases. To fill this gap, we first consider a new first-order, bivariate, random coefficient, integer-valued autoregressive model with a bivariate innovation, which follows the asymmetric Hermite distribution with five parameters. An attractive advantage of the new model is that the dependence between the series is achieved both through the innovations and through the cross-dependence of the series. In addition, the time series of counts are modeled with excess zeros, low counts and low over-dispersion. Next, we establish the stationarity and ergodicity of the new model and derive its stochastic properties. We discuss the conditional maximum likelihood (CML) estimator and its asymptotic properties. We assess the finite sample performance of the estimators through a simulation study. Finally, we demonstrate the superiority of the proposed model by analyzing an artificial dataset and a real dataset.

1. Introduction

Much effort has recently been put into the study of integer-valued time series models, a particular class of which are known as univariate constant coefficient INAR(1) models [1]; see [2,3,4] for recent reviews on this topic. However, the autoregressive coefficient may be affected by various environmental factors in practice; thus, random coefficient integer-valued autoregressive models have been proposed, such as the random coefficient INAR(1) model [5], and it was generalized to the p-order case by [6]; see [7,8,9,10,11,12,13] for recent developments.
Bivariate count data occur in many contexts, often as the counts of two asymmetric events, objects or individuals during a certain period of time. For example, such counts occur in epidemiology when two kinds of related diseases are examined, and in criminology when two kinds of crimes are considered. The multivariate constant coefficient INAR model was proposed by [14]. The model is defined through the thinning operator "∘" due to [15]. Let $A$ be a $2 \times 2$ matrix with entries $\alpha_{ij}$ satisfying $\alpha_{ij} \in (0,1)$ for $i,j = 1,2$, and let $X = (X_1, X_2)'$ be an integer-valued random vector. Then
$$(A \circ X)_i = \sum_{j=1}^{2} \alpha_{ij} \circ X_j,$$
where $\alpha_{ij} \circ X_j$ is a univariate binomial thinning operator and, for $i,j = 1,2$, the $\alpha_{ij} \circ X_j$ are mutually independent. The bivariate INAR(1) model (BINAR(1)) is then defined as
$$X_t = A \circ X_{t-1} + \epsilon_t, \quad t \in \mathbb{Z},$$
where $\{\epsilon_t\}$ is a sequence of i.i.d. non-negative integer-valued random vectors with finite mean and finite variance, independent of $A \circ X_{t-1}$; see [14] for details. Throughout the rest of the paper, $\epsilon_t$ will be referred to as the innovation vector.
Pedeli and Karlis [16] discussed a tractable BINAR(1) model, with emphasis placed on models with bivariate Poisson and bivariate negative binomial innovation vectors, which can be used to deal with bivariate count data with equi-dispersion and over-dispersion, but with little flexibility. Pedeli and Karlis [17] gave a further discussion of the properties of the multivariate INAR(1) model. Based on a finite range of counts, Scotto et al. [18] considered density-dependent bivariate binomial autoregressive models using a state-dependent thinning concept. Popović [19] proposed a BINAR(1) model whose survival components are generated by different binomial distributions.
The above models assume that the parameters $\alpha_{ij}$ are not affected by environmental factors, which motivates a new bivariate random coefficient INAR(1) model. Popović [20] proposed a bivariate INAR(1) model with random coefficients, but it ignores the cross-correlation of the process $\{X_t\}$ and assumes the random coefficients follow a specified binomial distribution. Cui and Zhu [21] proposed a bivariate integer-valued GARCH model with flexible cross-correlation, which is another direction; see [22] for the bivariate Poisson integer-valued GARCH model. Excess zeros are a common phenomenon in time series of counts and have been addressed by many authors, such as [23,24], but they are not well studied in the bivariate case. The motivation of this paper is to extend Popović's model and propose a new bivariate random coefficient INAR(1) model, which provides a method to tackle data with excess zeros, low counts and low over-dispersion.
This paper is organized as follows. We first recall the asymmetric bivariate Hermite distribution, and a bivariate binomial thinning operator with random coefficients, including some basic properties; then we propose the new model and discuss its stochastic properties in Section 2. Section 3 uses the conditional maximum likelihood (CML) to estimate unknown parameters and discusses the asymptotic properties of their CML estimators. A simulation and two examples are given in Section 4 and Section 5, respectively. Conclusions are drawn in Section 6. All proofs of theorems and propositions, including some auxiliary results are given in Appendix A.

2. A New Bivariate Random Coefficient INAR(1) Model

The asymmetric bivariate Hermite distribution with five parameters was proposed by Kemp and Papageorgiou [25], and its probability mass function takes the following form:
$$P(X = x, Y = y) = \exp\Big(-\sum_{r=1}^{5} \lambda_r\Big) \sum_{k=0}^{\min(x,y)} \frac{\lambda_5^k}{k!} \sum_{i=0}^{\lfloor (x-k)/2 \rfloor} \frac{\lambda_1^{x-k-2i} \lambda_2^i}{(x-k-2i)!\, i!} \sum_{j=0}^{\lfloor (y-k)/2 \rfloor} \frac{\lambda_3^{y-k-2j} \lambda_4^j}{(y-k-2j)!\, j!},$$
for random variables X and Y. For convenience, we abbreviate it as BH$(\lambda_1, \lambda_2, \lambda_3, \lambda_4, \lambda_5)$.
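As a quick numerical sanity check of this mass function, the following Python sketch (our own illustration; the function name `bh_pmf` and the parameter values are not from the paper) evaluates the pmf on a finite grid and verifies that the total mass is approximately one:

```python
import math

def bh_pmf(x, y, lam):
    """P(X=x, Y=y) for the bivariate Hermite distribution BH(l1,...,l5)."""
    l1, l2, l3, l4, l5 = lam

    def poisson_conv(m, a, b):
        # P(Z1 + 2*Z2 = m) up to the factor exp(-a-b), Z1 ~ Poisson(a), Z2 ~ Poisson(b)
        return sum(a**(m - 2*i) * b**i / (math.factorial(m - 2*i) * math.factorial(i))
                   for i in range(m // 2 + 1))

    total = 0.0
    for k in range(min(x, y) + 1):
        total += (l5**k / math.factorial(k)) * poisson_conv(x - k, l1, l2) * poisson_conv(y - k, l3, l4)
    return math.exp(-sum(lam)) * total

# Illustrative parameters; the mass beyond the 40x40 grid is negligible here.
lam = (0.5, 0.3, 0.4, 0.2, 0.6)
mass = sum(bh_pmf(x, y, lam) for x in range(41) for y in range(41))
print(round(mass, 6))  # close to 1.0
```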
If $(X, Y) \sim \mathrm{BH}(\lambda_1, \lambda_2, \lambda_3, \lambda_4, \lambda_5)$, then $(X, Y)$ has the following probability generating function:
$$G_{X,Y}(s_1, s_2) = \exp\{\lambda_1(s_1 - 1) + \lambda_2(s_1^2 - 1) + \lambda_3(s_2 - 1) + \lambda_4(s_2^2 - 1) + \lambda_5(s_1 s_2 - 1)\}.$$
From the probability generating function of $(X, Y)$, we have the following properties.
Properties: If ( X , Y ) follows BH ( λ 1 , λ 2 , λ 3 , λ 4 , λ 5 ) , then
(i)
Cov ( X , Y ) = λ 5 ; i.e., λ 5 is a measure of dependence between X and Y.
(ii)
The marginal distributions of X and Y are univariate Hermite distributions with parameters $(\lambda_1 + \lambda_5, \lambda_2)$ and $(\lambda_3 + \lambda_5, \lambda_4)$, and hence means $(\lambda_1 + \lambda_5) + 2\lambda_2$ and $(\lambda_3 + \lambda_5) + 2\lambda_4$, respectively.
(iii)
There exist mutually independent random variables $X_1, X_2, \ldots, X_5$ such that $X = X_1 + 2X_2 + X_5$ and $Y = X_3 + 2X_4 + X_5$, where $X_i$ follows Poisson$(\lambda_i)$, $i = 1, 2, 3, 4, 5$.
The index of dispersion is $I_X = \mathrm{Var}(X)/\mathrm{E}(X) = 1 + 2\lambda_2/(\lambda_1 + 2\lambda_2 + \lambda_5) \in (1,2)$ and $I_Y = \mathrm{Var}(Y)/\mathrm{E}(Y) = 1 + 2\lambda_4/(\lambda_3 + 2\lambda_4 + \lambda_5) \in (1,2)$; thus, X and Y are slightly over-dispersed random variables. Note that the univariate Hermite distribution is zero-inflated relative to the Poisson distribution, i.e., $P(Z = 0) > \exp\{-\mathrm{E}(Z)\}$ for a Hermite random variable Z; see [7,26] for more discussion on the univariate Hermite distribution. Hence, by property (ii), the bivariate Hermite distribution provides a good strategy for modeling series with low counts, excess zeros and slight over-dispersion.
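Property (iii) also gives a direct way to simulate BH$(\lambda_1, \ldots, \lambda_5)$ draws: generate five independent Poisson variates and combine them. The following Python sketch (helper names and parameter values are ours) does this and checks $\mathrm{Cov}(X, Y) = \lambda_5$ by Monte Carlo:

```python
import math, random

def rpois(lam, rng):
    # Knuth's Poisson sampler; adequate for the small rates used here.
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

def rbh(lam, rng):
    # X = Z1 + 2*Z2 + Z5, Y = Z3 + 2*Z4 + Z5 with Zi ~ Poisson(lambda_i)
    z = [rpois(l, rng) for l in lam]
    return z[0] + 2 * z[1] + z[4], z[2] + 2 * z[3] + z[4]

rng = random.Random(42)
lam = (0.5, 0.3, 0.4, 0.2, 0.6)
n = 20000
xs, ys = zip(*(rbh(lam, rng) for _ in range(n)))
mx, my = sum(xs) / n, sum(ys) / n
cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n
print(round(cov, 3))  # should be near lambda_5 = 0.6
```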
Applied to the bivariate case with $X = (X_1, X_2)'$, the thinning concept of [15] leads to the operation
$$A_t \circ X = \begin{pmatrix} \alpha_{11}(t) \circ X_1 + \alpha_{12}(t) \circ X_2 \\ \alpha_{21}(t) \circ X_1 + \alpha_{22}(t) \circ X_2 \end{pmatrix}, \quad t \in \mathbb{Z},$$
where $\alpha_{ij}(t) \circ X_j$ is a random univariate binomial thinning operator, $i,j = 1,2$, and the counting series involved in $\alpha_{ij}(t) \circ X_j$ are mutually independent. Denote $\mathrm{E}(\alpha_{ij}(t)) = \alpha_{ij}$ and $\mathrm{Var}(\alpha_{ij}(t)) = \sigma_{\alpha_{ij}}^2$. Then we obtain the following properties of the random bivariate binomial thinning operator.
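To make the operator concrete, the following Python sketch implements one draw of $A_t \circ X$, where each $\alpha_{ij}(t)$ is drawn afresh at time t and $\alpha_{ij}(t) \circ X_j$ is a binomial count with $X_j$ trials. Following the simulation design of Section 4, we assume $\alpha_{ij}(t) \sim U(0, 2\alpha_{ij})$; all names are ours:

```python
import random

def thin(alpha, x, rng):
    # binomial thinning: sum of x Bernoulli(alpha) counting variables
    return sum(rng.random() < alpha for _ in range(x))

def random_matrix_thin(alpha_mean, x, rng):
    """One draw of A_t o X with alpha_ij(t) ~ U(0, 2*alpha_ij)."""
    out = []
    for i in range(2):
        s = 0
        for j in range(2):
            a_t = rng.uniform(0.0, 2.0 * alpha_mean[i][j])  # random coefficient
            s += thin(a_t, x[j], rng)
        out.append(s)
    return out

rng = random.Random(1)
A = [[0.3, 0.2], [0.2, 0.3]]
n = 5000
draws = [random_matrix_thin(A, (50, 50), rng) for _ in range(n)]
m1 = sum(d[0] for d in draws) / n
print(round(m1, 2))  # E[(A_t o X)_1] = (0.3 + 0.2) * 50 = 25
```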
Proposition 1.
For $t \ge 1$,
(1) $\mathrm{E}\{(A_t \circ X)(A_t \circ X)' \mid X\} = A X X' A' + \mathrm{diag}\big(V_2 (X_1^2, X_2^2)'\big) + \mathrm{diag}\big((V_1 - V_2)(X_1, X_2)'\big)$ and $\mathrm{E}\{(A_t \circ X)(A_t \circ X)'\} = A \mathrm{E}(X X') A' + \mathrm{diag}(V_2 U_2) + \mathrm{diag}((V_1 - V_2) U_1)$, where $V_1 = (\alpha_{ij}(1 - \alpha_{ij}))_{2\times 2}$, $V_2 = (\sigma_{\alpha_{ij}}^2)_{2\times 2}$, $U_1 = (\mathrm{E}X_1, \mathrm{E}X_2)'$ and $U_2 = (\mathrm{E}X_1^2, \mathrm{E}X_2^2)'$.
(2) $\mathrm{Cov}(A_t \circ X, Y) = A\,\mathrm{Cov}(X, Y)$ and $\mathrm{Cov}(Y, A_t \circ X) = \mathrm{Cov}(Y, X) A'$.
Pedeli and Karlis [16] restricted $\alpha_{12}(t) = \alpha_{21}(t) = 0$ and $\alpha_{ii}(t) = \alpha_{ii}$ ($i = 1,2$) such that $(A_t \circ X)_i$ has the same distribution as $\alpha_{ii} \circ X_i$; thus, its marginal models behave like the univariate thinning operation. However, an important limitation is that the cross-correlation in the counts is ignored; i.e.,
$$\mathrm{Cov}((A_t \circ X)_1, (A_t \circ X)_2) = \mathrm{Cov}(\alpha_{11} \circ X_1, \alpha_{22} \circ X_2) = \alpha_{11}\alpha_{22}\,\mathrm{Cov}(X_1, X_2).$$
Pedeli and Karlis [17] proposed the full bivariate INAR(1) model with bivariate Poisson innovations, which considers the cross-correlation in the counts but restricts the innovations to a bivariate Poisson distribution. In addition, the models in Pedeli and Karlis [16,17] restricted $A_t$ to be a constant matrix, so that the effects of various environmental factors are ignored. Popović [20] proposed a bivariate INAR(1) model comprised of random coefficients, but the cross-correlation of $\{X_t, t \in \mathbb{Z}\}$ was ignored. Therefore, a novel extension of the above BINAR(1) model is proposed which allows the autoregressive coefficients to vary over time and whose innovation follows the asymmetric bivariate Hermite distribution. This model will be called the BHRCINAR(1) model, and its definition is given as follows.
Definition 1.
Let $X_t = (X_{1t}, X_{2t})'$ be a non-negative integer-valued bivariate random vector. Then the sequence $\{X_t\}$ is said to be a BHRCINAR(1) process if $\{X_t\}$ satisfies
$$X_t = A_t \circ X_{t-1} + \epsilon_t, \quad t \in \mathbb{Z}, \qquad (2)$$
where
(i) 
A t X t 1 is a random bivariate binomial thinning operator;
(ii) 
ϵ t = ϵ 1 t , ϵ 2 t follows BH ( λ 1 , λ 2 , λ 3 , λ 4 , λ 5 ) ;
(iii) 
Given $X_{1,t-1}$ and $X_{2,t-1}$, the counting series in $A_t \circ X_{t-1}$ are independent, and independent of $\epsilon_t$.
The ith equation of model (2) is given by
$$X_{it} = \alpha_{i1}(t) \circ X_{1,t-1} + \alpha_{i2}(t) \circ X_{2,t-1} + \epsilon_{it}, \quad i = 1, 2. \qquad (3)$$
Notice that the model given in (3) is similar to the HINAR(1) model given in [7]; the main difference is that $X_{it}$ involves two parallel survivor terms, $X_{1,t-1}$ and $X_{2,t-1}$. The BHRCINAR(1) process $\{X_t, t \in \mathbb{Z}\}$ has two parts: the first part consists of the survivors $A_t \circ X_{t-1}$ of the system at the preceding time $t-1$, where the survival rate $A_t$ varies over time; the other part is comprised of the innovation vectors $\epsilon_t$, which capture counts with many zeros, low counts and low over-dispersion. See Liu et al. [11] for the p-order random coefficient INAR model.
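Combining the random thinning operator with BH innovations, a path of model (2) can be simulated recursively. The following self-contained Python sketch (uniform random coefficients as in Section 4; function names ours) illustrates this:

```python
import math, random

def rpois(lam, rng):  # Knuth's Poisson sampler
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

def simulate_bhrcinar1(alpha_mean, lam, T, x0, rng):
    """Simulate X_t = A_t o X_{t-1} + eps_t with eps_t ~ BH(lam), alpha_ij(t) ~ U(0, 2*alpha_ij)."""
    path = [tuple(x0)]
    for _ in range(T):
        prev = path[-1]
        z = [rpois(l, rng) for l in lam]
        eps = (z[0] + 2 * z[1] + z[4], z[2] + 2 * z[3] + z[4])  # BH innovation
        new = []
        for i in range(2):
            surv = 0
            for j in range(2):
                a_t = rng.uniform(0.0, 2.0 * alpha_mean[i][j])  # random coefficient
                surv += sum(rng.random() < a_t for _ in range(prev[j]))  # binomial thinning
            new.append(surv + eps[i])
        path.append(tuple(new))
    return path

rng = random.Random(7)
A = [[0.2, 0.1], [0.1, 0.3]]
lam = (0.6, 0.2, 0.1, 0.6, 0.5)
path = simulate_bhrcinar1(A, lam, 200, (2, 3), rng)
print(len(path), path[:3])
```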
Remark 1.
(i) 
If A t is a diagonal and constant matrix and λ 2 = λ 4 = 0 , model (2) is reduced to the bivariate INAR(1) model with bivariate Poisson innovations; see [16] for detail.
(ii) 
If A t is a non-diagonal and constant matrix and λ 2 = λ 4 = 0 , model (2) is reduced to the full bivariate INAR(1) model with bivariate Poisson innovations; see [17] for detail.
(iii) 
If A t is a diagonal matrix and P ( α i i ( t ) = α i ) = 1 P ( α i i ( t ) = 0 ) = p i ( i = 1 , 2 ) and λ 2 = λ 4 = 0 , model (2) is reduced to that given in [20].
Denote $\varsigma = (\varsigma_1, \varsigma_2)'$ and $\vartheta = (\vartheta_1, \vartheta_2)'$. The conditional probability distribution of the BHRCINAR(1) process can be expressed as follows:
$$P(X_t = \varsigma \mid X_{t-1} = \vartheta) = \sum_{k=0}^{g_1}\sum_{s=0}^{g_2} f_1(k)\, f_2(s)\, f_3(\varsigma_1 - k, \varsigma_2 - s), \qquad (4)$$
where
$$f_1(k) = \sum_{j_1=0}^{k} \binom{\vartheta_1}{j_1}\binom{\vartheta_2}{k-j_1} \int_0^1 (\alpha_{11}(1))^{j_1}(1 - \alpha_{11}(1))^{\vartheta_1 - j_1}\, dP_{\alpha_{11}(1)} \int_0^1 (\alpha_{12}(1))^{k-j_1}(1 - \alpha_{12}(1))^{\vartheta_2 - k + j_1}\, dP_{\alpha_{12}(1)},$$
$$f_2(s) = \sum_{j_2=0}^{s} \binom{\vartheta_1}{j_2}\binom{\vartheta_2}{s-j_2} \int_0^1 (\alpha_{21}(1))^{j_2}(1 - \alpha_{21}(1))^{\vartheta_1 - j_2}\, dP_{\alpha_{21}(1)} \int_0^1 (\alpha_{22}(1))^{s-j_2}(1 - \alpha_{22}(1))^{\vartheta_2 - s + j_2}\, dP_{\alpha_{22}(1)},$$
and $f_3(\varsigma_1 - k, \varsigma_2 - s) = P(\epsilon_{1t} = \varsigma_1 - k, \epsilon_{2t} = \varsigma_2 - s)$, with $g_1 = \min(\varsigma_1, \vartheta_1)$, $g_2 = \min(\varsigma_2, \vartheta_2)$ and $\theta = \{\alpha_{ij}, \lambda_k\}$, $i,j = 1,2$, $k = 1, 2, \ldots, 5$.
To establish the stationary properties of the new model, we make the following two assumptions.
Assumption 1.
(i) Let $\mathrm{E}(\alpha_{ij}(t)) = \alpha_{ij}$, $\mathrm{Var}(\alpha_{ij}(t)) = \sigma_{\alpha_{ij}}^2$, $\gamma = \max_{i,j}(\alpha_{ij}^2 + \sigma_{\alpha_{ij}}^2) < 1$ and $\underline{\kappa} \le \mathrm{E}(\alpha_{ij}(t))^4 \le \bar{\kappa}$, where $\underline{\kappa}$ and $\bar{\kappa}$ are finite positive constants; (ii) for any $i \ne j$, $A_{t_i}$ is independent of $A_{t_j}$; (iii) let $\mathrm{E}(A_t) = A$; the largest eigenvalue of the non-negative matrix $A$ is less than 1.
Hence, the bivariate marginal distribution of model (2), expressed in terms of the bivariate innovation vectors, takes the following form:
$$X_t \stackrel{d}{=} \Big(\prod_{i=0}^{k-1} A_{t-i}\Big) \circ X_{t-k} + \sum_{i=0}^{k-2}\Big(\prod_{j=0}^{i} A_{t-j}\Big) \circ \epsilon_{t-(i+1)} + \epsilon_t \stackrel{d}{=} \sum_{i=0}^{\infty}\Big(\prod_{j=0}^{i} A_{t-j}\Big) \circ \epsilon_{t-(i+1)} + \epsilon_t. \qquad (5)$$
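Condition (iii) of Assumption 1 is easy to check numerically for a given mean matrix A; for a 2 × 2 matrix the largest eigenvalue is available in closed form from the characteristic polynomial. A small Python sketch (helper name ours):

```python
import math

def largest_eigenvalue_2x2(A):
    """Largest eigenvalue of a 2x2 matrix via the characteristic polynomial."""
    tr = A[0][0] + A[1][1]
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    disc = tr * tr - 4.0 * det  # non-negative when the off-diagonal product is >= 0
    return (tr + math.sqrt(disc)) / 2.0

A = [[0.4, 0.3], [0.3, 0.4]]
rho = largest_eigenvalue_2x2(A)
print(rho)  # approximately 0.7 < 1, so Assumption 1(iii) holds for this A
```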
Assumption 2.
The parametric space $\Theta$ is compact with $\Theta = \{\theta \mid \theta = \{\alpha_{ij}, \lambda_k\},\ \underline{\alpha} \le \alpha_{ij} \le \bar{\alpha},\ \underline{\lambda} \le \lambda_k \le \bar{\lambda},\ i,j = 1,2,\ k = 1, 2, \ldots, 5\}$, where $\underline{\alpha}$, $\bar{\alpha}$, $\underline{\lambda}$ and $\bar{\lambda}$ are finite positive constants, and $\theta_0$ is an interior point of $\Theta$.
Assumption 2 implies $\mathrm{E}(\epsilon_{1t})^4 \le 2^8 \bar{\lambda} < \infty$ by the inequality $\mathrm{E}(X + Y)^p \le 2^{p-1}(\mathrm{E}(X^p) + \mathrm{E}(Y^p))$, $p > 1$. Similarly, we have $\mathrm{E}(\epsilon_{2t})^4 \le 2^8 \bar{\lambda} < \infty$. Hence, $\mathrm{E}(X_{it})^4 < \infty$, $i = 1, 2$. Denote
$$\mathrm{Var}(\epsilon_t) = \Sigma = \begin{pmatrix} \sigma_{\epsilon_1}^2 & \sigma_{12} \\ \sigma_{21} & \sigma_{\epsilon_2}^2 \end{pmatrix} \quad \text{and} \quad \kappa_i = \sum_{j=1}^{2}\big\{(\alpha_{ij}^2 + \sigma_{\alpha_{ij}}^2)\mu_j^2 + [\alpha_{ij}(1 - \alpha_{ij}) - \sigma_{\alpha_{ij}}^2]\mu_j\big\} + \sigma_{\epsilon_i}^2,$$
where $\sigma_{\epsilon_1}^2 = \lambda_1 + 4\lambda_2 + \lambda_5$, $\sigma_{\epsilon_2}^2 = \lambda_3 + 4\lambda_4 + \lambda_5$ and $\sigma_{12} = \sigma_{21} = \lambda_5$. This also implies that they are all finite. In addition, $\gamma$ is also finite under the assumption $\underline{\kappa} \le \mathrm{E}(\alpha_{ij}(t))^4 \le \bar{\kappa}$.
Under the above two assumptions, the BHRCINAR(1) model admits a strictly stationary and ergodic solution, as shown in the following theorem.
Theorem 1.
If Assumptions 1 and 2 hold, $\{X_t\}$ defined by Equation (2) is a strictly stationary and ergodic Markov chain on $\mathbb{N}_0^2$.
Moments of (2) are obtained and given as follows.
Proposition 2.
Let { X t } follow (2) and Assumptions 1 and 2 hold; then
(1) 
E ( X t X t 1 ) = A X t 1 + E ( ϵ t ) .
(2) 
If E ( X 0 ) = ( I A ) 1 E ( ϵ t ) , E ( X t ) = ( I A ) 1 E ( ϵ t ) , where elements of the vectors E ( ϵ t ) are E ( ϵ 1 t ) = λ 1 + 2 λ 2 + λ 5 and E ( ϵ 2 t ) = λ 3 + 2 λ 4 + λ 5 , and I is an identity matrix.
(3) 
$\mathrm{Var}(X_{kt} \mid X_{t-1}) = \sum_{s=1}^{2} \sigma_{\alpha_{ks}}^2 X_{s,t-1}^2 + \sum_{s=1}^{2}[\alpha_{ks}(1 - \alpha_{ks}) - \sigma_{\alpha_{ks}}^2] X_{s,t-1} + \sigma_{\epsilon_k}^2$, $k = 1, 2$; i.e.,
$$\mathrm{Var}(X_t \mid X_{t-1}) = V_2 (X_{1,t-1}^2, X_{2,t-1}^2)' + (V_1 - V_2)(X_{1,t-1}, X_{2,t-1})' + \mathrm{Var}(\epsilon_t),$$
where $V_1 = (\alpha_{ij}(1 - \alpha_{ij}))_{2\times 2}$ and $V_2 = (\sigma_{\alpha_{ij}}^2)_{2\times 2}$.
(4) 
Denote $R(k) = \mathrm{Cov}(X_{t+k}, X_t)$, $k = 0, 1, 2, 3, \ldots$. Then we have $R(k) = A^k R(0)$, where $R(0) = A R(0) A' + \mathrm{diag}(V_2 U_2) + \mathrm{diag}((V_1 - V_2) U_1) + \Sigma$, with $U_1 = (\mathrm{E}X_1, \mathrm{E}X_2)'$ and $U_2 = (\mathrm{E}X_1^2, \mathrm{E}X_2^2)'$.
According to (1) and (2), the elements of the vector of expectations are
$$\mathrm{E}(X_{1t}) = \frac{(1 - \alpha_{22})\mathrm{E}(\epsilon_{1t}) + \alpha_{12}\mathrm{E}(\epsilon_{2t})}{(1 - \alpha_{11})(1 - \alpha_{22}) - \alpha_{12}\alpha_{21}} \quad \text{and} \quad \mathrm{E}(X_{2t}) = \frac{(1 - \alpha_{11})\mathrm{E}(\epsilon_{2t}) + \alpha_{21}\mathrm{E}(\epsilon_{1t})}{(1 - \alpha_{11})(1 - \alpha_{22}) - \alpha_{12}\alpha_{21}}.$$
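These closed-form expressions are just the components of $(I - A)^{-1}\mathrm{E}(\epsilon_t)$. The following Python sketch (names and parameter values ours) computes them and cross-checks that $(I - A)\mathrm{E}(X_t)$ reproduces $\mathrm{E}(\epsilon_t)$:

```python
def stationary_mean(alpha, lam):
    """E(X_t) = (I - A)^{-1} E(eps_t) for the 2x2 case, in closed form."""
    a11, a12, a21, a22 = alpha
    e1 = lam[0] + 2 * lam[1] + lam[4]  # E(eps_1t) = l1 + 2*l2 + l5
    e2 = lam[2] + 2 * lam[3] + lam[4]  # E(eps_2t) = l3 + 2*l4 + l5
    det = (1 - a11) * (1 - a22) - a12 * a21
    return (((1 - a22) * e1 + a12 * e2) / det,
            ((1 - a11) * e2 + a21 * e1) / det)

alpha = (0.4, 0.3, 0.3, 0.4)
lam = (0.6, 0.8, 0.8, 0.6, 0.6)
m1, m2 = stationary_mean(alpha, lam)

# cross-check: (I - A) m should reproduce E(eps_t)
e1 = lam[0] + 2 * lam[1] + lam[4]
e2 = lam[2] + 2 * lam[3] + lam[4]
r1 = (1 - alpha[0]) * m1 - alpha[1] * m2
r2 = -alpha[2] * m1 + (1 - alpha[3]) * m2
print(round(m1, 3), round(m2, 3))
```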
In addition, R(0) consists of
$$R_{11}(0) = \mathrm{Var}(X_{1t}) = \frac{\sigma_{\alpha_{12}}^2\mathrm{Var}(X_{2t}) + \sum_{j=1}^{2}\{\sigma_{\alpha_{1j}}^2(\mathrm{E}(X_{jt}))^2 + [\alpha_{1j}(1 - \alpha_{1j}) - \sigma_{\alpha_{1j}}^2]\mathrm{E}(X_{jt})\} + 2\alpha_{11}\alpha_{12}R_{12}(0) + \sigma_{\epsilon_1}^2}{1 - (\alpha_{11}^2 + \sigma_{\alpha_{11}}^2)},$$
$$R_{22}(0) = \mathrm{Var}(X_{2t}) = \frac{\sigma_{\alpha_{21}}^2\mathrm{Var}(X_{1t}) + \sum_{j=1}^{2}\{\sigma_{\alpha_{2j}}^2(\mathrm{E}(X_{jt}))^2 + [\alpha_{2j}(1 - \alpha_{2j}) - \sigma_{\alpha_{2j}}^2]\mathrm{E}(X_{jt})\} + 2\alpha_{21}\alpha_{22}R_{21}(0) + \sigma_{\epsilon_2}^2}{1 - (\alpha_{22}^2 + \sigma_{\alpha_{22}}^2)},$$
and
$$R_{12}(0) = R_{21}(0) = \mathrm{Cov}(X_{1t}, X_{2t}) = \frac{\alpha_{11}\alpha_{21}\mathrm{Var}(X_{1t}) + \alpha_{22}\alpha_{12}\mathrm{Var}(X_{2t}) + \lambda_5}{1 - \alpha_{11}\alpha_{22} - \alpha_{12}\alpha_{21}}.$$
Remark 2.
According to (5), we obtain $\mathrm{E}(X_t) = A^t\mathrm{E}(X_0) + (I - A)^{-1}(I - A^t)\mathrm{E}(\epsilon_t)$ and $\mathrm{E}(X_t \mid X_{t-k}) = A^k X_{t-k} + (I - A)^{-1}(I - A^k)\mathrm{E}(\epsilon_t)$, using the facts that $\mathrm{E}\big(\prod_{i=0}^{k-1} A_{t-i}\big) = A^k$ and $\mathrm{E}\big(\sum_{i=0}^{k-2}\big(\prod_{j=0}^{i} A_{t-j}\big) \circ \epsilon_{t-(i+1)} + \epsilon_t\big) = \sum_{i=0}^{k-1} A^i\mathrm{E}(\epsilon_t) = (I - A)^{-1}(I - A^k)\mathrm{E}(\epsilon_t)$, where $A^0$ is the identity matrix. Thus, the conditional expectation converges to the unconditional expectation as $k \to \infty$, provided the largest eigenvalue of the non-negative matrix $A$ is less than 1.
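The convergence noted in Remark 2 can be observed numerically by iterating the conditional mean recursion; in the following Python sketch (illustrative values, names ours), the iterates approach the stationary mean $(I - A)^{-1}\mathrm{E}(\epsilon_t)$:

```python
def mat_vec(A, v):
    return [A[0][0] * v[0] + A[0][1] * v[1], A[1][0] * v[0] + A[1][1] * v[1]]

A = [[0.4, 0.3], [0.3, 0.4]]
eps_mean = [2.8, 2.6]  # E(eps_t) for lambda = (0.6, 0.8, 0.8, 0.6, 0.6)

# iterate m_{k+1} = A m_k + E(eps), i.e. E(X_k | X_0) starting from X_0 = (0, 0)
m = [0.0, 0.0]
for _ in range(200):
    Am = mat_vec(A, m)
    m = [Am[0] + eps_mean[0], Am[1] + eps_mean[1]]
print([round(v, 3) for v in m])  # converges to (I - A)^{-1} E(eps), about [9.111, 8.889]
```

Because the largest eigenvalue of this A is 0.7, the error shrinks geometrically at rate 0.7 per step.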

3. Parameter Estimation

Let $\theta = \{\alpha_{ij}, \lambda_k\}$, $i,j = 1,2$, $k = 1, 2, \ldots, 5$, and let $X_0, X_1, \ldots, X_T$ be generated by the BHRCINAR(1) model with true parameter value $\theta_0$ and sample size T.
Then the CML estimator $\hat{\theta}_{ml}$ is obtained by maximizing
$$\ell(\theta) = \sum_{t=2}^{T} \log P_\theta(X_t \mid X_{t-1}) = \sum_{t=2}^{T} \log \sum_{k=0}^{g_1}\sum_{s=0}^{g_2} \Bigg[\sum_{j_1=0}^{k}\binom{X_{1,t-1}}{j_1}\binom{X_{2,t-1}}{k-j_1} f_1(k, j_1)\Bigg]\Bigg[\sum_{j_2=0}^{s}\binom{X_{1,t-1}}{j_2}\binom{X_{2,t-1}}{s-j_2} f_2(s, j_2)\Bigg] f_3, \qquad (6)$$
where $g_1 = \min(X_{1t}, X_{1,t-1})$, $g_2 = \min(X_{2t}, X_{2,t-1})$ and $f_3 = f_3(X_{1t} - k, X_{2t} - s)$.
Let $P_{\theta_0}$ be the probability measure under the true parameter $\theta_0$; unless otherwise indicated, moments $\mathrm{E}(\cdot)^k$, $k \ge 1$, are taken under $\theta_0$. To analyze the sample properties of $\hat{\theta}_{ml}$, the following additional assumptions are made.
Assumption 3.
If there exists a $t \ge 1$ such that $P_{\theta_0}(X_t \mid X_{t-1}) = P_\theta(X_t \mid X_{t-1})$ a.s., then $\theta = \theta_0$, where a.s. means almost surely.
Assumption 4.
(i) $\sum_{m=0}^{\infty}\sum_{n=0}^{\infty} f_3(m, n) < \infty$; (ii) $\sum_{m=0}^{\infty}\sum_{n=0}^{\infty} \big|\frac{\partial f_3(m,n)}{\partial \lambda_k}\big| < \infty$, $k = 1, 2, \ldots, 5$; (iii) $\sum_{m=0}^{\infty}\sum_{n=0}^{\infty} \big|\frac{\partial^2 f_3(m,n)}{\partial \lambda_i \partial \lambda_j}\big| < \infty$, $i,j = 1, 2, \ldots, 5$.
Lemmas A1 and A2 in the Appendix A establish the identification of model (2). First-order and second-order derivatives can be obtained by Lemma A3 in the Appendix A.
Theorem 2.
Let $\{X_t\}$ be defined by (2) and let Assumptions 1–4 hold; then, as $T \to \infty$,
(1). There exists a unique estimator $\hat{\theta}_{ml}$ such that $\hat{\theta}_{ml} \to \theta_0$ in probability;
(2). $\sqrt{T}(\hat{\theta}_{ml} - \theta_0) \stackrel{d}{\to} N\big(0, (J(\theta_0))^{-1} I(\theta_0)(J(\theta_0))^{-1}\big)$, where $I(\theta_0) = \lim_{T\to\infty} T^{-1}\mathrm{E}\big(\frac{\partial \ell(\theta_0)}{\partial \theta}\frac{\partial \ell(\theta_0)}{\partial \theta'}\big)$ and $J(\theta_0) = -\lim_{T\to\infty} T^{-1}\mathrm{E}\big(\frac{\partial^2 \ell(\theta_0)}{\partial \theta \partial \theta'}\big)$.

4. Simulation Study

In this section, we consider the finite sample properties of the CML estimators by assuming that the distributions of $\alpha_{ij}(t)$ follow a specific parametric model. Without loss of generality, we assume $\alpha_{ij}(t) \sim U(0, 2\alpha_{ij})$, and let the theoretical mean be the initial value of the process for given parameters. For $X_0 = (9, 9)'$ and $\theta = (0.4, 0.3, 0.3, 0.4, 0.6, 0.8, 0.8, 0.6, 0.6)$, plots of the sample paths and their cross-correlation function (CCF) for sample size 200 are given in Figure 1.
We conducted a simulation study for the non-diagonal BHRCINAR(1) model with the parameter combination $(0.4, 0.3, 0.3, 0.4, 0.6, 0.8, 0.8, 0.6, 0.6)$ and the diagonal BHRCINAR(1) model with three sets of parameter combinations, i.e., $(A_1) = (0.4, 0.3, 0.8, 0.6, 0.6, 0.8, 0.4)$, $(A_2) = (0.2, 0.5, 2, 1, 1, 1.5, 1)$ and $(A_3) = (0.35, 0.35, 2, 1, 1, 1.5, 1)$, respectively. The sample sizes were 50, 100 and 300, reflecting small, moderate and large samples, and we used 200 replications.
For the simulated sample, performances of coefficient of variation (CV), standard deviation (SD), mean absolute deviation error (MADE) and bias are given in Table 1 and Table 2. To clearly present these results, we also give the graphical inspections for the non-diagonal BHRCINAR(1) model and the parameter combination (A1) for the diagonal model, which are given in Figure 2 and Figure 3; others are similar.
From Table 1 and Table 2, we can make the following observations. CV, SD, MADE and bias decreased as T increased for both the non-diagonal and diagonal models; those of $\alpha_{ij}$, $i,j = 1,2$, decreased faster than those of $\lambda_k$, $k = 1, 2, \ldots, 5$, and the estimates of $\alpha_{ij}$ were much closer to the true values for the same sample size, especially for the non-diagonal model. In general, the estimator of the parameter $\lambda_k$ tended to be positively biased in finite samples. In all cases, $\lambda_k$ was over-estimated compared with $\alpha_{ij}$, which is not surprising, since $\alpha_{ij}$ is the average value of the regression coefficient $\alpha_{ij}(t)$, $t = 1, 2, \ldots$, and the regression coefficient varies over time. Regarding the parameter $\lambda_k$, the results indicate that the bias becomes negligible when the sample size is large enough, because the SD gradually decreases to zero.
Figure 2 and Figure 3 show that there were some slight fluctuations in CV, SD, bias and MADE for smaller sample sizes, but all of them had downward trends. Based on the above discussion, these results illustrate the consistency of the estimators for larger sample sizes, because the CV, SD, bias and MADE of the parameters rapidly decreased to zero as T increased, as expected.
Figure 4 and Figure 5 give the boxplots of the estimates for the parameter combination (A1); others are similar.
Figure 4 and Figure 5 show that the estimates were consistent for both diagonal and non-diagonal models, because the range of bias gradually decreased as the sample size increased, which is much more obvious for the non-diagonal model.

5. Real Data Examples

To show the flexibility of the proposed model, we use an artificial example and a real example to illustrate our model.

5.1. Artificial Data Example

In this section, we first consider a simulated time series $\{X_t\} = \{(X_{1t}, X_{2t})', t = 1, 2, \ldots, T\}$ of length $T = 200$ generated from (2) with $\theta = (0.2, 0.1, 0.1, 0.3, 0.6, 0.2, 0.1, 0.6, 0.5)$. We find that $\{X_{1t}\}$ and $\{X_{2t}\}$ have mean (variance) values equal to 2.4850 (2.8842) and 3.1450 (4.1648), respectively. Hence, both $\{X_{1t}\}$ and $\{X_{2t}\}$ are slightly over-dispersed.
To illustrate the data structure, we give the histograms of the data series in Figure 6, in which the range of { X 1 t } is between 0 and 9 and that of { X 2 t } is between 0 and 11; there are 19 (15) zeros in series { X 1 t } ( { X 2 t } ).
The path data series and the CCF of two data series are given in Figure 7, and the sample autocorrelation function (ACF) and the sample partial autocorrelation function (PACF) of both { X 1 t } and { X 2 t } are provided in Figure 8.
To fit the data series, we used the proposed BHRCINAR(1) model, the full BINAR(1)-BP model and the full BINAR(1)-NB model; see [16,17] for details. The CML estimates and approximated standard errors (se) of the parameters, together with the fitted log-likelihood (Log-lik), AIC and root mean square errors (RMSE), are summarized in Table 3, where
$$\mathrm{RMSE} = \sqrt{\frac{1}{T-1}\sum_{t=2}^{T}\big(X_t - \mathrm{E}_{\hat\theta}(X_t \mid X_{t-1})\big)'\big(X_t - \mathrm{E}_{\hat\theta}(X_t \mid X_{t-1})\big)}$$
and se is computed by Theorem 2.
Table 3 shows that the BHRCINAR(1) model produced the smallest AIC and RMSE. Hence, the BHRCINAR(1) model is more suitable for the analyzed dataset. The resulting Pearson residual of the BHRCINAR(1) model is defined by
$$e_{1t} = \frac{X_{1t} - \mathrm{E}_{\hat\theta}(X_{1t} \mid X_{t-1})}{\sqrt{\mathrm{Var}_{\hat\theta}(X_{1t} \mid X_{t-1})}} \quad \text{and} \quad e_{2t} = \frac{X_{2t} - \mathrm{E}_{\hat\theta}(X_{2t} \mid X_{t-1})}{\sqrt{\mathrm{Var}_{\hat\theta}(X_{2t} \mid X_{t-1})}}. \qquad (7)$$
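Under the uniform random coefficient assumption of Section 4 ($\alpha_{ij}(t) \sim U(0, 2\alpha_{ij})$, so $\sigma_{\alpha_{ij}}^2 = \alpha_{ij}^2/3$), the conditional mean comes from Proposition 2(1) and the conditional variance from Proposition 2(3). The following Python sketch (a toy illustration with made-up data; names ours) computes $e_{1t}$ under these assumptions:

```python
import math

def pearson_residuals_1(series, alpha, lam):
    """e_{1t} for a bivariate count series [(x1, x2), ...] under the BHRCINAR(1) model,
    assuming alpha_1j(t) ~ U(0, 2*alpha_1j) so that sigma^2_{alpha_1j} = alpha_1j^2 / 3."""
    a11, a12 = alpha
    s11, s12 = a11**2 / 3.0, a12**2 / 3.0
    eps_mean = lam[0] + 2 * lam[1] + lam[4]  # E(eps_1t)
    eps_var = lam[0] + 4 * lam[1] + lam[4]   # Var(eps_1t)
    res = []
    for (p1, p2), (c1, _) in zip(series[:-1], series[1:]):
        mean = a11 * p1 + a12 * p2 + eps_mean
        var = (s11 * p1**2 + s12 * p2**2
               + (a11 * (1 - a11) - s11) * p1
               + (a12 * (1 - a12) - s12) * p2
               + eps_var)
        res.append((c1 - mean) / math.sqrt(var))
    return res

series = [(2, 3), (1, 2), (3, 1), (0, 2), (2, 2)]
e1 = pearson_residuals_1(series, (0.2, 0.1), (0.6, 0.2, 0.1, 0.6, 0.5))
print([round(r, 3) for r in e1])
```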
The resulting mean and variance of the Pearson residuals of series { X 1 t } ( { X 2 t } ) were 0.0123 and 1.0012 ( 0.0067 and 1.0029 ), respectively. The residual analysis in Figure 9 adequately illustrates that the BHRCINAR(1) model did rather well.

5.2. Crime Data of Pittsburgh

In this section, we consider the counts of forgery and fraud on police car beat 12 of Pittsburgh from 1990 to 2001, covering T = 144 observations. The data were downloaded on 23 September 2016 from the website http://www.forecastingprinciples.com/index.php/crimedata.
Empirical mean values (variances) of forgery and fraud are 1.6111 (2.7428) and 4.000 (6.4615), which shows that both the forgery and fraud counts are slightly over-dispersed. To account for the structure of the considered data, we give their histograms in Figure 10, their paths and CCF in Figure 11, and their ACF and PACF in Figure 12, respectively.
The range of series forgery is between 0 and 8 and that of series fraud is between 0 and 17. There are 45 (7) zeros in series forgery (fraud) and the two series are asymmetric but with significant dependencies.
The CML estimates and se of parameters, including the fitted values of log-likelihood (Log-lik), AIC and RMSE, are summarized in Table 4.
Table 4 shows that the BHRCINAR(1) model produced the smallest AIC and RMSE. Hence, the BHRCINAR(1) model is more suitable for the analyzed dataset. According to (7), the mean and variance of Pearson residuals of series forgery (fraud) are 0.0726 and 1.0552 ( 0.0021 and 1.0825 ), respectively. The residual analysis is given in Figure 13, which shows that this BHRCINAR(1) model can adequately describe the considered datasets.

6. Concluding Remarks

This paper considered a more flexible BHRCINAR(1) model for bivariate integer-valued time series data. The proposed model extends the random coefficient INAR(1) model to the two-dimensional case, and it can be regarded as a generalization of the BINAR(1) model of Pedeli and Karlis [17] or the model proposed by [20], but with more flexibility for dealing with low counts, excess zeros and slightly over-dispersed data. We discussed the essential properties of the model, the CML estimators and their asymptotic behavior. A simulation study was conducted to examine the finite sample performance of the estimators. A real data application was provided to illustrate that our model is effective relative to existing models.
Beyond this adaptation to modeling bivariate time series with bivariate Hermite innovations, there are many further possibilities that deserve consideration in future research. Local influence analysis will be challenging for BINAR models in identifying influential observations; see [27,28]. To make BINAR-type models more flexible in real applications, it may be interesting to update the autoregressive coefficient using past information of the observed process; see [29]. In addition, including explanatory covariates in the model may be a viable way to analyze bivariate time series with a certain trend, and this will be considered in a future project.

Author Contributions

Conceptualization, Q.L. and H.C.; methodology, Q.L. and H.C.; software, H.C. and X.L.; validation, Q.L., H.C. and X.L.; formal analysis, H.C.; data curation, X.L.; writing—original draft preparation, Q.L. and H.C.; writing—review and editing, H.C. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

Crime data were downloaded on 23 September 2016 from http://www.forecastingprinciples.com/index.php/crimedata.

Acknowledgments

Li’s work is supported by Natural Science Foundation of Jilin Province (number 20210101160JC), the Natural Science Foundation of Changchun Normal University and the Research Start-Up Fund of Changchun Normal University. Chen’s work is supported by the Research Start-Up Fund of Henan University and the Postdoctoral Research Start-up Fund of Henan University.

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this paper.

Appendix A. Proof of Theorems and Propositions

Proposition A1.
The proof of items (1)–(4) and (6) can be found in Proposition 2.1 of [11]. Here, we focus on the proof of item (5), treating each component separately; i.e., for $k, s = 1, 2$, we consider
$$\{\mathrm{E}\{(A_t \circ X)(A_t \circ X)'\}\}_{ks} = \sum_{i,j=1}^{2}\mathrm{E}[(\alpha_{ki}(t) \circ X_i)(\alpha_{sj}(t) \circ X_j)].$$
If $k \ne s$ or $i \ne j$, then $\alpha_{ki}(t) \circ X_i$ and $\alpha_{sj}(t) \circ X_j$ are conditionally independent given $X_i$ and $X_j$, and
$$\mathrm{E}[(\alpha_{ki}(t) \circ X_i)(\alpha_{sj}(t) \circ X_j)] = \mathrm{E}\{\mathrm{E}[(\alpha_{ki}(t) \circ X_i)(\alpha_{sj}(t) \circ X_j) \mid X_i, X_j]\} = \alpha_{ki}\alpha_{sj}\mathrm{E}(X_i X_j).$$
However, if $k = s$ and $i = j$, we have
$$\mathrm{E}(\alpha_{ki}(t) \circ X_i)^2 = \mathrm{E}\{\mathrm{E}[(\alpha_{ki}(t) \circ X_i)^2 \mid X_i]\} = (\sigma_{ki}^2 + \alpha_{ki}^2)\mathrm{E}(X_i^2) + [\alpha_{ki}(1 - \alpha_{ki}) - \sigma_{ki}^2]\mathrm{E}(X_i).$$
Denote $\delta_{ks} = 1$ if $k = s$ and $\delta_{ks} = 0$ otherwise. Therefore, we have
$$\{\mathrm{E}\{(A_t \circ X)(A_t \circ X)'\}\}_{ks} = \sum_{i,j=1}^{2}\alpha_{ki}\alpha_{sj}\mathrm{E}(X_i X_j) + \delta_{ks}\sum_{i=1}^{2}\sigma_{ki}^2\mathrm{E}(X_i^2) + \delta_{ks}\sum_{i=1}^{2}[\alpha_{ki}(1 - \alpha_{ki}) - \sigma_{ki}^2]\mathrm{E}(X_i).$$
Denote $V_1 = (\alpha_{ij}(1 - \alpha_{ij}))_{2\times 2}$, $V_2 = (\sigma_{\alpha_{ij}}^2)_{2\times 2}$, $U_2 = (\mathrm{E}X_1^2, \mathrm{E}X_2^2)'$ and $U_1 = (\mathrm{E}X_1, \mathrm{E}X_2)'$. Then $\mathrm{E}\{(A_t \circ X)(A_t \circ X)'\} = A\mathrm{E}(XX')A' + \mathrm{diag}(V_2 U_2) + \mathrm{diag}((V_1 - V_2)U_1)$. The proof is complete.
Theorem A1.
The proof comprises two parts as follows:
(a). Since $\{X_t, t \in \mathbb{Z}\}$ defined by (2) is a first-order Markov process, for all $t_1, \ldots, t_n \in \mathbb{Z}$,
$$P(X_{t_1+\tau} = x_1, \ldots, X_{t_n+\tau} = x_n) = \prod_{i=2}^{n} P(X_{t_i+\tau} = x_i \mid X_{t_{i-1}+\tau} = x_{i-1})\, P(X_{t_1+\tau} = x_1), \quad \tau \in \mathbb{Z}.$$
According to (1), we obtain $P(\epsilon_{1,t_i+\tau} = \epsilon_1, \epsilon_{2,t_i+\tau} = \epsilon_2) = P(\epsilon_{1,t_i} = \epsilon_1, \epsilon_{2,t_i} = \epsilon_2)$.
Since the conditional probability function of $X_t$ given $X_{t-1}$ is derived from the fact that its elements $X_{1t}$ and $X_{2t}$ satisfy
$$P(X_{1t} = \varsigma_1, X_{2t} = \varsigma_2 \mid X_{1,t-1} = \vartheta_1, X_{2,t-1} = \vartheta_2) = P(X_{1t} = \varsigma_1 \mid X_{1,t-1} = \vartheta_1, X_{2,t-1} = \vartheta_2)\, P(X_{2t} = \varsigma_2 \mid X_{1,t-1} = \vartheta_1, X_{2,t-1} = \vartheta_2, X_{1t} = \varsigma_1),$$
we have that $P(X_{t_i+\tau} = x_i \mid X_{t_{i-1}+\tau} = x_{i-1}) = P(X_{t_i} = x_i \mid X_{t_{i-1}} = x_{i-1})$.
According to (5), we have $X_{t_i+\tau} \stackrel{d}{=} X_{t_i} \stackrel{d}{=} \sum_{m=0}^{\infty}\big(\prod_{j=0}^{m} A_{t_i-j}\big) \circ \epsilon_{t_i-(m+1)} + \epsilon_{t_i}$. Therefore, $P(X_{t_1+\tau} = x_1, \ldots, X_{t_n+\tau} = x_n) = P(X_{t_1} = x_1, \ldots, X_{t_n} = x_n)$. Hence, $\{X_t\}$ is a strictly stationary process.
(b). For all states $\varsigma, \vartheta, \kappa_{t-2}, \kappa_{t-3}, \ldots$, we have
$$P(X_t = \varsigma \mid X_{t-1} = \vartheta, X_{t-2} = \kappa_{t-2}, \ldots) = P(X_t = \varsigma \mid X_{t-1} = \vartheta) = P(\vartheta, \varsigma),$$
where $P(\vartheta, \varsigma)$ denotes the transition probability from state $\vartheta$ to state $\varsigma$. Thus $\{X_t\}$ is a homogeneous Markov chain.
Let $P_{\varsigma\vartheta}^{n}$ denote the n-step transition probability from state $\varsigma$ to state $\vartheta$. Then, for given $X_{t-1}$, the conditional probability function of $X_t$ is derived by
$$P(X_{1t} = \varsigma_1, X_{2t} = \varsigma_2 \mid X_{1,t-1} = \vartheta_1, X_{2,t-1} = \vartheta_2) = P(X_{1t} = \varsigma_1 \mid X_{1,t-1} = \vartheta_1, X_{2,t-1} = \vartheta_2)\, P(X_{2t} = \varsigma_2 \mid X_{1,t-1} = \vartheta_1, X_{2,t-1} = \vartheta_2, X_{1t} = \varsigma_1);$$
then $P_{\tau\upsilon}^{1} > 0$ for all $\tau, \upsilon \in \mathbb{N}_0^2$. Hence, $\{X_t\}$ is irreducible.
In addition, the k-step transition probability $P_{0,0}^{k}$ is obtained from (5), i.e.,
$$P_{0,0}^{k} = P(X_t = 0 \mid X_{t-k} = 0) = P\Big(\Big(\prod_{i=0}^{k-1} A_{t-i}\Big) \circ X_{t-k} = 0\Big)\prod_{i=1}^{k} P\Big(\Big(\prod_{j=0}^{i-1} A_{t-j}\Big) \circ \epsilon_{t-i} = 0\Big) P(\epsilon_t = 0),$$
where the first factor can be obtained by a method similar to (4) and the remaining factors are obtained by (1). Thus, $P_{0,0}^{k} > 0$ with probability one. Hence $\{X_t\}$ is aperiodic.
Thus, { X t } is an ergodic Markov chain. The proof is complete.
Proposition A2.
(1) can be directly obtained from property (2) of Proposition 1; (2) can be directly obtained from (1); (3) can be directly obtained from property (5) of Proposition 1; and (4) can be directly obtained by applying property (6) of Proposition 1 k times. The proof is complete.
Theorem A2.
To prove (1) and (2), we need to show that the conditions of Theorems 4.1.2 and 4.1.3 in [30] hold for the BHRCINAR(1) model.
(a). Let $W_t(\theta) = \log P_\theta(X_t \mid X_{t-1})$; then $\ell(\theta) = \sum_{t=1}^{T} W_t(\theta)$. We observe that, for $\theta \in \Theta$, $P_\theta(X_t \mid X_{t-1})$ is measurable with respect to $X_t$ and continuous in an open and convex neighbourhood $N(\theta_0)$ of $\theta_0$; then there exists at least one point $\bar{\theta} \in N(\theta_0)$ at which $P_\theta(X_t \mid X_{t-1})$ attains its maximum. Hence, $\mathrm{E}\sup_{\theta \in N(\theta_0)} W_t(\theta) \le \log \mathrm{E}\, P_{\bar{\theta}}(X_t \mid X_{t-1}) < \infty$. By Theorem 1 and Theorem 4.2.1 in [30], $T^{-1}\sum_{t=1}^{T} W_t(\theta) \to \mathrm{E}\, W_t(\theta)$ in probability as $T \to \infty$.
(b). By Jensen's inequality,
$$\mathrm{E}(W_t(\theta)) - \mathrm{E}(W_t(\theta_0)) = \mathrm{E}\log\frac{P_\theta(X_t \mid X_{t-1})}{P_{\theta_0}(X_t \mid X_{t-1})} \le \log \mathrm{E}\frac{P_\theta(X_t \mid X_{t-1})}{P_{\theta_0}(X_t \mid X_{t-1})} = 0. \qquad (A1)$$
Thus, E W t ( θ ) attains a strict local maximum at θ 0 by (A1) and Assumption 3.
Hence, the conditions of Theorem 4.1.2 in [30] are fulfilled, thus part (1) holds.
(c). Note that P_θ(X_t | X_{t−1}) ≤ ∑_{k=0}^{g₁} ∑_{s=0}^{g₂} f₃(X_{1t} − k, X_{2t} − s) by (6). Then
(c1). ∑_{X_{1t}=0}^{∞} ∑_{X_{2t}=0}^{∞} P_θ(X_t | X_{t−1}) < ∞ by (i) of Assumption 4, for θ ∈ N(θ₀).
(c2). ∑_{X_{1t}=0}^{∞} ∑_{X_{2t}=0}^{∞} |∂P_θ(X_t | X_{t−1}) / ∂θ_i| < ∞ by (ii) of Assumption 4, for each component θ_i of θ ∈ N(θ₀).
(c3). ∑_{X_{1t}=0}^{∞} ∑_{X_{2t}=0}^{∞} |∂²P_θ(X_t | X_{t−1}) / ∂θ_i ∂θ_j| < ∞ by (iii) of Assumption 4, for all components θ_i, θ_j of θ.
(d). Thus, ∂W_t(θ₀)/∂θ is well defined and bounded by (c2), so (1/T) ∑_{t=1}^T ∂W_t(θ₀)/∂θ →_p E[∂W_t(θ₀)/∂θ] by the ergodic theorem, i.e., (1/T) ∂ℓ(θ₀)/∂θ →_p E[∂W_t(θ₀)/∂θ]. According to (1) and (3) in Lemma A3, E‖∂W_t(θ₀)/∂θ‖² ≤ K₁ (E(X_{1t}²) + E(X_{2t}²)) < ∞ for a suitable constant K₁. In addition, the matrix Cov(∂W_t(θ₀)/∂θ) = E[(∂W_t(θ₀)/∂θ)(∂W_t(θ₀)/∂θ)′] is nonsingular since E[∂W_t(θ₀)/∂θ] = 0, and (1/√T) ∂ℓ(θ₀)/∂θ →_d N(0, I(θ₀)), where I(θ₀) = lim_{T→∞} (1/T) E[(∂ℓ(θ₀)/∂θ)(∂ℓ(θ₀)/∂θ)′], by the martingale central limit theorem and the Cramér–Wold device. Similarly, ∂²W_t(θ)/∂θ_i∂θ_j is well defined and continuous by (2) and (4) in Lemma A3, and it exists and is bounded in N(θ₀) by (c3). Thus, E‖∂²W_t(θ)/∂θ∂θ′‖ ≤ K₂ (E(X_{1t}⁴) + E(X_{2t}⁴)) < ∞ for a suitable constant K₂. For convenience, denote ∂²ℓ(θ)/∂θ∂θ′ = G(X_t, θ) = (g_{ij}(X_t, θ)), E[∂²ℓ(θ)/∂θ∂θ′] = G(θ) = (E g_{ij}(X_t, θ)), and h(X_t, θ) = g_{ij}(X_t, θ) − E g_{ij}(X_t, θ), so that E h(X_t, θ) = 0. By the ergodic theorem, for every θ ∈ N(θ₀), T^{−1} ∑_{t=1}^T h(X_t, θ) → 0 in probability. Furthermore, T^{−1} ∑_{t=1}^T h(X_t, θ_T) → 0 in probability whenever θ_T → θ₀ as T → ∞.
(e). Using Lemma A3, ∂³ℓ(θ)/∂θ_i∂θ_j∂θ_k exists, and for suitable constants K and K₃ there exists a function H(X_{1t}, X_{2t}) such that |∂³ℓ(θ)/∂θ_i∂θ_j∂θ_k| ≤ K H(X_{1t}, X_{2t}), where E[H(X_{1t}, X_{2t})] ≤ K₃ (E(X_{1t}³) + E(X_{2t}³)) < ∞. By the Taylor expansion,
0 = ∂ℓ(θ̂_ml)/∂θ = ∂ℓ(θ₀)/∂θ + [∂²ℓ(θ*)/∂θ∂θ′] (θ̂_ml − θ₀),
where θ* lies between θ̂_ml and θ₀. Then
√T (θ̂_ml − θ₀) = −[(1/T) ∂²ℓ(θ*)/∂θ∂θ′]^{−1} (1/√T) ∂ℓ(θ₀)/∂θ.
Hence, part (2) holds. The proof is complete.
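To make the CML machinery above concrete, the following sketch numerically maximizes a conditional log-likelihood ℓ(θ) = ∑_t log P_θ(X_t | X_{t−1}) built from the binomial-thinning/innovation convolution. It uses a simplified univariate fixed-coefficient INAR(1) model with Poisson innovations as a stand-in for the full bivariate Hermite likelihood; the model, sample size, and starting values are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import binom, poisson

rng = np.random.default_rng(1)

def simulate_inar1(T, alpha, lam):
    # X_t = alpha o X_{t-1} + eps_t, eps_t ~ Poisson(lam)
    x = np.zeros(T, dtype=int)
    for t in range(1, T):
        x[t] = rng.binomial(x[t - 1], alpha) + rng.poisson(lam)
    return x

def neg_cml(theta, x):
    # negative conditional log-likelihood: transition pmf is the
    # convolution of Bin(x[t-1], alpha) survivors and Poisson(lam) arrivals
    alpha, lam = theta
    ll = 0.0
    for t in range(1, len(x)):
        k = np.arange(0, min(x[t], x[t - 1]) + 1)
        p = np.sum(binom.pmf(k, x[t - 1], alpha) * poisson.pmf(x[t] - k, lam))
        ll += np.log(max(p, 1e-300))
    return -ll

x = simulate_inar1(1000, alpha=0.4, lam=1.5)
res = minimize(neg_cml, x0=[0.3, 1.0], args=(x,),
               bounds=[(0.01, 0.99), (0.01, None)])
print(res.x)  # estimates should be close to the true (0.4, 1.5)
```

The bivariate case replaces the single convolution by the double sum over the two thinned components, but the optimization workflow is the same.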
Lemma A1.
Let g(s₁, s₂, λ) = λ₁(s₁ − 1) + λ₂(s₁² − 1) + λ₃(s₂ − 1) + λ₄(s₂² − 1) + λ₅(s₁s₂ − 1), where λ = (λ₁, λ₂, λ₃, λ₄, λ₅) and −1 < s₁, s₂ < 1. If g(s₁, s₂, λ) = g(s₁, s₂, λ⁰) for all such (s₁, s₂), then λ = λ⁰.
Proof. 
By the assumption, we have
∂g(s₁, s₂, λ)/∂s₁ = ∂g(s₁, s₂, λ⁰)/∂s₁, ∂g(s₁, s₂, λ)/∂s₂ = ∂g(s₁, s₂, λ⁰)/∂s₂, ∂²g(s₁, s₂, λ)/∂s₁² = ∂²g(s₁, s₂, λ⁰)/∂s₁², ∂²g(s₁, s₂, λ)/∂s₂² = ∂²g(s₁, s₂, λ⁰)/∂s₂², and ∂²g(s₁, s₂, λ)/∂s₁∂s₂ = ∂²g(s₁, s₂, λ⁰)/∂s₁∂s₂, which together imply λ₁ = λ₁⁰, λ₂ = λ₂⁰, λ₃ = λ₃⁰, λ₄ = λ₄⁰, and λ₅ = λ₅⁰. □
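As an informal symbolic check of Lemma A1 (not part of the proof), one can equate the coefficients of the bivariate polynomial g(s₁, s₂, λ) − g(s₁, s₂, λ⁰) and solve. In the sympy sketch below, the symbol names l1–l5 and m1–m5 stand in for λ and λ⁰.

```python
import sympy as sp

s1, s2 = sp.symbols('s1 s2')
lam = sp.symbols('l1:6')   # l1..l5, playing the role of lambda
lam0 = sp.symbols('m1:6')  # m1..m5, playing the role of lambda^0

def g(l):
    return (l[0] * (s1 - 1) + l[1] * (s1**2 - 1) + l[2] * (s2 - 1)
            + l[3] * (s2**2 - 1) + l[4] * (s1 * s2 - 1))

# g(., lam) == g(., lam0) for all (s1, s2) iff every coefficient of the
# difference polynomial in (s1, s2) vanishes
coeffs = sp.Poly(sp.expand(g(lam) - g(lam0)), s1, s2).coeffs()
sol = sp.solve(coeffs, lam, dict=True)
print(sol)  # unique solution with l_i == m_i for every i
```

The coefficient equations are linear in λ, so the unique solution λ = λ⁰ confirms the implication chain above.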
Lemma A2.
If { X t } is the strictly stationary and ergodic solution of model (2) and Assumptions 1 and 2 hold, then model (2) is identifiable.
Proof. 
We first show that if ϵ_t(θ) and ϵ_t(θ₀) are equal in distribution, then λ_k = λ_k⁰, k = 1, 2, …, 5, which follows from Lemma A1. Since ϵ_t = X_t − A_t ∘ X_{t−1} by (2), we have
0 = E(ϵ_t(θ)) − E(ϵ_t(θ₀)) = (A⁰ − A) E(X_t),
thus A⁰ = A, i.e., α_{ij}⁰ = α_{ij}, i, j = 1, 2. Otherwise, we would have E(X_t) = 0, which contradicts the fact that E(X_{it}) > 0. Therefore, (2) is identifiable, i.e., if P_θ(X_t | X_{t−1}) = P_{θ₀}(X_t | X_{t−1}), then θ = θ₀. □
Lemma A3.
Let α(t) ∘ X denote the univariate binomial thinning operator defined by [15], and let Y follow a Poisson distribution with parameter λ, with P(Y = k) = e(k, λ). Then, for given t, we have
(1). 
d P(α(t) ∘ X = k | X = n) / d α(t) = n [ P(α(t) ∘ X = k − 1 | X = n − 1) − P(α(t) ∘ X = k | X = n − 1) ];
(2). 
d² P(α(t) ∘ X = k | X = n) / d(α(t))² = n(n − 1) [ P(α(t) ∘ X = k − 2 | X = n − 2) − 2 P(α(t) ∘ X = k − 1 | X = n − 2) + P(α(t) ∘ X = k | X = n − 2) ];
(3). 
d e(k, λ) / d λ = e(k − 1, λ) − e(k, λ);
(4). 
d² e(k, λ) / d λ² = e(k − 2, λ) − 2 e(k − 1, λ) + e(k, λ).
The proof of Lemma A3 follows directly from the distributions of α(t) ∘ X and Y. □
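The four identities in Lemma A3 can be checked numerically against central finite differences of the binomial and Poisson probability mass functions. The sketch below (with arbitrary test values of n, k, α, and λ) verifies the two first-derivative identities, (1) and (3); the second-derivative cases follow the same pattern.

```python
from scipy.stats import binom, poisson

def e(k, lam):
    # Poisson pmf e(k, lam), zero for k < 0
    return float(poisson.pmf(k, lam)) if k >= 0 else 0.0

def b(k, n, a):
    # pmf of the thinned count: alpha o X given X = n is Bin(n, alpha)
    return float(binom.pmf(k, n, a)) if 0 <= k <= n else 0.0

h = 1e-6

# Lemma A3 (3): d e(k, lam)/d lam = e(k-1, lam) - e(k, lam)
lam, k = 1.3, 4
fd3 = (e(k, lam + h) - e(k, lam - h)) / (2 * h)
ex3 = e(k - 1, lam) - e(k, lam)

# Lemma A3 (1): d P(a o X = k | X = n)/d a
#             = n [ b(k-1, n-1, a) - b(k, n-1, a) ]
n, kk, a = 6, 3, 0.4
fd1 = (b(kk, n, a + h) - b(kk, n, a - h)) / (2 * h)
ex1 = n * (b(kk - 1, n - 1, a) - b(kk, n - 1, a))

print(abs(fd3 - ex3), abs(fd1 - ex1))  # both differences are close to zero
```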

References

1. Al-Osh, M.A.; Alzaid, A.A. First-order integer-valued autoregressive process. J. Time Ser. Anal. 1987, 8, 261–275.
2. Scotto, M.G.; Weiß, C.H.; Gouveia, S. Thinning-based models in the analysis of integer-valued time series: A review. Stat. Model. 2015, 15, 590–618.
3. Weiß, C.H. Thinning operations for modeling time series of counts—A survey. AStA Adv. Stat. Anal. 2008, 92, 319–341.
4. Weiß, C.H. An Introduction to Discrete-Valued Time Series; John Wiley & Sons: Chichester, UK, 2018.
5. Zheng, H.; Basawa, I.V.; Datta, S. First-order random coefficient integer-valued autoregressive processes. J. Stat. Plan. Inference 2007, 137, 212–229.
6. Zheng, H.; Basawa, I.V.; Datta, S. Inference for pth-order random coefficient integer-valued autoregressive processes. J. Time Ser. Anal. 2006, 27, 411–440.
7. Fernández-Fontelo, A.; Fontdecaba, S.; Alba, A.; Puig, P. Integer-valued AR processes with Hermite innovations and time-varying parameters: An application to bovine fallen stock surveillance at a local scale. Stat. Model. 2017, 17, 172–195.
8. Gorgi, P. Integer-valued autoregressive models with survival probability driven by a stochastic recurrence equation. J. Time Ser. Anal. 2018, 39, 150–171.
9. Li, H.; Yang, K.; Zhao, S.; Wang, D. First-order random coefficients integer-valued threshold autoregressive processes. AStA Adv. Stat. Anal. 2018, 102, 305–331.
10. Liu, Z.; Li, Q.; Zhu, F. Random environment binomial thinning integer-valued autoregressive process with Poisson or geometric marginal. Braz. J. Probab. Stat. 2020, 34, 251–272.
11. Liu, X.; Wang, D.; Deng, D.; Cheng, J.; Lu, F. Maximum likelihood estimation of the DDRCINAR(p) model. Commun. Stat. Theory Methods 2020.
12. Zhang, H.; Wang, D.; Zhu, F. Empirical likelihood inference for random coefficient INAR(p) process. J. Time Ser. Anal. 2011, 32, 195–203.
13. Zheng, H.; Basawa, I.V. First-order observation-driven integer-valued autoregressive processes. Stat. Probab. Lett. 2008, 78, 1–9.
14. Latour, A. The multivariate GINAR(p) process. Adv. Appl. Probab. 1997, 29, 228–248.
15. Steutel, F.W.; van Harn, K. Discrete analogues of self-decomposability and stability. Ann. Probab. 1979, 7, 893–899.
16. Pedeli, X.; Karlis, D. A bivariate INAR(1) process with application. Stat. Model. 2011, 11, 325–349.
17. Pedeli, X.; Karlis, D. Some properties of multivariate INAR(1) processes. Comput. Stat. Data Anal. 2013, 67, 213–225.
18. Scotto, M.G.; Weiß, C.H.; Silva, M.E.; Pereira, I. Bivariate binomial autoregressive models. J. Multivar. Anal. 2014, 125, 233–251.
19. Popović, P.M. A geometric bivariate time series with different marginal parameters. Stat. Pap. 2016, 57, 731–753.
20. Popović, P.M. A bivariate INAR(1) model with different thinning parameters. Math. Inform. 2015, 30, 263–280.
21. Cui, Y.; Zhu, F. A new bivariate integer-valued GARCH model allowing for negative cross-correlation. Test 2018, 27, 428–452.
22. Cui, Y.; Li, Q.; Zhu, F. Flexible bivariate Poisson integer-valued GARCH model. Ann. Inst. Stat. Math. 2020, 72, 1449–1477.
23. Qi, X.; Li, Q.; Zhu, F. Modeling time series of count with excess zeros and ones based on INAR(1) model with zero-and-one inflated Poisson innovations. J. Comput. Appl. Math. 2019, 346, 572–590.
24. Zhu, F. Zero-inflated Poisson and negative binomial integer-valued GARCH models. J. Stat. Plan. Inference 2012, 142, 826–839.
25. Kemp, C.D.; Papageorgiou, H. Bivariate Hermite distributions. Sankhyā Ser. A 1982, 44, 269–280.
26. Kemp, C.D.; Kemp, A.W. Some properties of the Hermite distribution. Biometrika 1965, 52, 381–394.
27. Zhu, F.; Liu, S.; Shi, L. Local influence analysis for Poisson autoregression with an application to stock transaction data. Stat. Neerl. 2016, 70, 4–25.
28. Zhu, F.; Shi, L.; Liu, S. Influence diagnostics in log-linear integer-valued GARCH models. AStA Adv. Stat. Anal. 2015, 99, 311–335.
29. Creal, D.; Koopman, S.J.; Lucas, A. Generalized autoregressive score models with applications. J. Appl. Econom. 2013, 28, 777–795.
30. Amemiya, T. Advanced Econometrics; Harvard University Press: Cambridge, MA, USA, 1985.
Figure 1. (a). Path of { X 1 t } (b). Path of { X 2 t } (c). CCF of { X 1 t } and { X 2 t } .
Figure 2. Variation of parameter for the non-diagonal BHRCINAR(1) model.
Figure 3. Variation of parameter set I for the diagonal BHRCINAR(1) model.
Figure 4. Boxplots of the bias of the CML estimates for the non-diagonal BHRCINAR(1) model.
Figure 5. Boxplots of the bias of parameter set I for the diagonal BHRCINAR(1) model.
Figure 6. (a). Histogram of { X 1 t } (b). Histogram of { X 2 t } .
Figure 7. (a). Path of { X 1 t } (b). Path of { X 2 t } (c). CCF of { X 1 t } and { X 2 t } .
Figure 8. (a). ACF of { X 1 t } (b). ACF of { X 2 t } (c). PACF of { X 1 t } (d). PACF of { X 2 t } .
Figure 9. Pearson residual analysis: (a). standard normal quantiles of series X 1 t (b). standard normal quantiles of series X 2 t (c). ACF of the residual of series X 1 t (d). ACF of the residual of series X 2 t .
Figure 10. (a). Histogram of forgery (b). Histogram of fraud.
Figure 11. (a). Path of forgery (b). path of fraud (c). CCF of forgery and fraud.
Figure 12. (a). ACF of series forgery (b). ACF of series fraud (c). PACF of forgery (d). PACF of fraud.
Figure 13. Pearson residual analysis: (a). standard normal quantiles of series forgery (b). standard normal quantiles of series fraud (c). ACF of the residual of series forgery (d). ACF of the residual of series fraud.
Table 1. Results for the non-diagonal BHRCINAR(1) model.
Size | Stat | α11 | α12 | α21 | α22 | λ1 | λ2 | λ3 | λ4 | λ5
True values: (0.4, 0.3, 0.3, 0.4, 0.6, 0.8, 0.8, 0.6, 0.6)
50 | CV | 0.0085 | 0.0068 | 0.0122 | 0.0058 | 0.0940 | 0.0455 | 0.0435 | 0.1137 | 0.0905
 | SD | 0.0034 | 0.0020 | 0.0038 | 0.0023 | 0.0589 | 0.0373 | 0.0356 | 0.0711 | 0.0563
 | Bias | −0.0002 | 0.0002 | 0.0003 | 0.0001 | 0.0268 | 0.0185 | 0.0191 | 0.0253 | 0.0224
 | MADE | 0.0011 | 0.0008 | 0.0012 | 0.0009 | 0.0283 | 0.0197 | 0.0206 | 0.0263 | 0.0233
100 | CV | 0.0072 | 0.0061 | 0.0048 | 0.0043 | 0.0509 | 0.0267 | 0.0498 | 0.0435 | 0.0300
 | SD | 0.0029 | 0.0018 | 0.0014 | 0.0017 | 0.0312 | 0.0216 | 0.0408 | 0.0266 | 0.0183
 | Bias | −0.0001 | −0.0001 | −0.0001 | −0.0002 | 0.0122 | 0.0120 | 0.0189 | 0.0112 | 0.0112
 | MADE | 0.0009 | 0.0007 | 0.0006 | 0.0006 | 0.0128 | 0.0128 | 0.0192 | 0.0118 | 0.0115
300 | CV | 0.0018 | 0.0033 | 0.0020 | 0.0022 | 0.0157 | 0.0083 | 0.0107 | 0.0054 | 0.0122
 | SD | 0.0007 | 0.0010 | 0.0006 | 0.0009 | 0.0095 | 0.0066 | 0.0086 | 0.0033 | 0.0074
 | Bias | −0.0000 | −0.0001 | 0.0001 | 0.0000 | 0.0034 | 0.0037 | 0.0038 | 0.0013 | 0.0036
 | MADE | 0.0003 | 0.0004 | 0.0003 | 0.0004 | 0.0046 | 0.0043 | 0.0046 | 0.0022 | 0.0042
Table 2. Results for the diagonal BHRCINAR(1) model.
Size | Stat | α11 | α22 | λ1 | λ2 | λ3 | λ4 | λ5
Parameter set I: (0.4, 0.3, 0.8, 0.6, 0.6, 0.8, 0.4)
50 | CV | 0.0096 | 0.0101 | 0.1004 | 0.0925 | 0.0701 | 0.0626 | 0.0760
 | SD | 0.0038 | 0.0030 | 0.0627 | 0.0767 | 0.0575 | 0.0386 | 0.0467
 | Bias | −0.0002 | 0.0002 | 0.0241 | 0.0290 | 0.0196 | 0.0168 | 0.0150
 | MADE | 0.0013 | 0.0011 | 0.0252 | 0.0304 | 0.0213 | 0.0183 | 0.0164
100 | CV | 0.0074 | 0.0129 | 0.1152 | 0.0521 | 0.0492 | 0.0774 | 0.0526
 | SD | 0.0030 | 0.0039 | 0.0719 | 0.0425 | 0.0401 | 0.0476 | 0.0323
 | Bias | −0.0001 | −0.0006 | 0.0240 | 0.0162 | 0.0161 | 0.0154 | 0.0136
 | MADE | 0.0013 | 0.0017 | 0.0251 | 0.0181 | 0.0178 | 0.0175 | 0.0148
300 | CV | 0.0046 | 0.0049 | 0.0273 | 0.0054 | 0.0068 | 0.0097 | 0.0069
 | SD | 0.0019 | 0.0015 | 0.0165 | 0.0043 | 0.0054 | 0.0058 | 0.0042
 | Bias | 0.0000 | 0.0000 | 0.0033 | 0.0021 | 0.0019 | 0.0018 | 0.0023
 | MADE | 0.0006 | 0.0006 | 0.0041 | 0.0028 | 0.0029 | 0.0031 | 0.0028
Parameter set II: (0.2, 0.5, 2, 1, 1, 1.5, 1)
50 | CV | 0.0144 | 0.0056 | 0.0327 | 0.0495 | 0.0557 | 0.0843 | 0.1408
 | SD | 0.0029 | 0.0028 | 0.0662 | 0.0508 | 0.0570 | 0.1289 | 0.1482
 | Bias | 0.0005 | 0.0007 | 0.0239 | 0.0258 | 0.0225 | 0.0292 | 0.0526
 | MADE | 0.0011 | 0.0009 | 0.0300 | 0.0286 | 0.0257 | 0.0345 | 0.0550
100 | CV | 0.0137 | 0.0044 | 0.0233 | 0.0554 | 0.0498 | 0.0302 | 0.1670
 | SD | 0.0027 | 0.0022 | 0.0470 | 0.0567 | 0.0508 | 0.0459 | 0.1750
 | Bias | 0.0001 | 0.0005 | 0.0187 | 0.0235 | 0.0201 | 0.0184 | 0.0478
 | MADE | 0.0008 | 0.0007 | 0.0262 | 0.0271 | 0.0238 | 0.0211 | 0.0510
300 | CV | 0.0035 | 0.0011 | 0.0109 | 0.0418 | 0.0455 | 0.0079 | 0.0461
 | SD | 0.0007 | 0.0006 | 0.0220 | 0.0424 | 0.0462 | 0.0118 | 0.0469
 | Bias | 0.0000 | 0.0000 | 0.0093 | 0.0133 | 0.0157 | 0.0051 | 0.0157
 | MADE | 0.0003 | 0.0003 | 0.0124 | 0.0141 | 0.0168 | 0.0068 | 0.0166
Parameter set III: (0.35, 0.35, 2, 1, 1, 1.5, 1)
50 | CV | 0.0070 | 0.0114 | 0.0215 | 0.0441 | 0.0610 | 0.0365 | 0.0731
 | SD | 0.0025 | 0.0040 | 0.0434 | 0.0450 | 0.0623 | 0.0555 | 0.0749
 | Bias | −0.0002 | 0.0011 | 0.0145 | 0.0191 | 0.0214 | 0.0210 | 0.0250
 | MADE | 0.0012 | 0.0017 | 0.0181 | 0.0208 | 0.0222 | 0.0237 | 0.0259
100 | CV | 0.0184 | 0.0142 | 0.0145 | 0.0243 | 0.0518 | 0.0381 | 0.0376
 | SD | 0.0023 | 0.0040 | 0.0434 | 0.0450 | 0.0623 | 0.0555 | 0.0749
 | Bias | 0.0001 | −0.0003 | 0.0128 | 0.0111 | 0.0204 | 0.0206 | 0.0199
 | MADE | 0.0011 | 0.0017 | 0.0181 | 0.0208 | 0.0222 | 0.0237 | 0.0259
300 | CV | 0.0022 | 0.0051 | 0.0095 | 0.0145 | 0.0163 | 0.0089 | 0.0280
 | SD | 0.0008 | 0.0018 | 0.0190 | 0.0145 | 0.0164 | 0.0135 | 0.0282
 | Bias | −0.0000 | 0.0000 | 0.0074 | 0.0047 | 0.0088 | 0.0064 | 0.0096
 | MADE | 0.0004 | 0.0007 | 0.0092 | 0.0059 | 0.0093 | 0.0074 | 0.0104
Table 3. Estimates for the artificial series.
Values are reported as estimate (standard error).
BHRCINAR(1) | Full BINAR(1)-NB | Full BINAR(1)-BP
α̂11: 0.1484 (0.1133) | α̂11: 0.0974 (0.0832) | α̂11: 0.0885 (0.0939)
α̂12: 0.0374 (0.0611) | α̂12: 0.0681 (0.0513) | α̂12: 0.0997 (0.0568)
α̂21: 0.0415 (0.0802) | α̂21: 0.1360 (0.0626) | α̂21: 0.0974 (0.0727)
α̂22: 0.2195 (0.0985) | α̂22: 0.1192 (0.0676) | α̂22: 0.1691 (0.0823)
λ̂1: 1.2144 (0.4228) | |
λ̂2: 0.1547 (0.1236) | |
λ̂3: 0.9446 (0.5877) | λ̂1: 1.6830 (0.2179) | λ̂1: 1.9389 (0.0434)
λ̂4: 0.4619 (0.2075) | λ̂2: 2.1067 (0.2321) | λ̂2: 2.1271 (0.0360)
λ̂5: 0.4829 (0.2485) | β̂: 0.3698 (0.1566) | ϕ̂: 0.1211 (0.3459)
AIC: 1571.7585 | 1588.6233 | 1580.0750
RMSE: 1.6801 | 2.6134 | 2.6175
Log Lik: −774.8793 | −787.3116 | −783.0375
Table 4. Estimates for the crime datasets.
Values are reported as estimate (standard error).
BHRCINAR(1) | Full BINAR(1)-NB | Full BINAR(1)-BP
α̂11: 0.1789 (0.1083) | α̂11: 0.1048 (0.0724) | α̂11: 0.0852 (0.0833)
α̂12: 0.0611 (0.0493) | α̂12: 0.1228 (0.0386) | α̂12: 0.1227 (0.0404)
α̂21: 0.0565 (0.1101) | α̂21: 0.1287 (0.0893) | α̂21: 0.0259 (0.1134)
α̂22: 0.1039 (0.0841) | α̂22: 0.0732 (0.0588) | α̂22: 0.1563 (0.0828)
λ̂1: 0.1522 (0.1494) | |
λ̂2: 0.2800 (0.0807) | |
λ̂3: 1.1793 (0.6924) | λ̂1: 0.8203 (0.1690) | λ̂1: 1.0959 (0.0233)
λ̂4: 0.9327 (0.3188) | λ̂2: 3.0990 (0.2919) | λ̂2: 2.8290 (0.0461)
λ̂5: 0.4671 (0.1694) | β̂: 0.3329 (0.1464) | ϕ̂: 0.2500 (0.9350)
AIC: 1156.8042 | 1207.4668 | 1181.2062
RMSE: 1.5790 | 3.0299 | 3.0698
Log Lik: −567.4021 | −596.7334 | −583.6031
Li, Q.; Chen, H.; Liu, X. A New Bivariate Random Coefficient INAR(1) Model with Applications. Symmetry 2022, 14, 39. https://doi.org/10.3390/sym14010039
