Article

Grouped Change-Points Detection and Estimation in Panel Data

The School of Mathematics and Statistics, Beijing Institute of Technology, Beijing 100081, China
*
Author to whom correspondence should be addressed.
Mathematics 2024, 12(5), 750; https://doi.org/10.3390/math12050750
Submission received: 12 December 2023 / Revised: 18 February 2024 / Accepted: 20 February 2024 / Published: 1 March 2024
(This article belongs to the Section Probability and Statistics)

Abstract

The change-points in panel data can be obstacles to fitting models; thus, detecting change-points accurately before modeling is crucial. Extant methods often assume either that all panels share common change-points or that grouped panels have the same unknown parameters. However, the problem of panels with differing change-points and model parameters has not been solved. To deal with this problem, a novel approach is proposed here to simultaneously detect and estimate the grouped change-points precisely, by employing an iterative algorithm and a penalized cost function. Numerical experiments and case studies demonstrate the superior performance of the proposed method in grouping the panels and estimating the number and positions of the change-points.

1. Introduction

Due to the influence of various factors, the structure (mean, covariance, etc.) of data may change at certain times or locations. Thus, testing structural stability before modeling is crucial. Studies on detecting changes in data structure and estimating the position of the change-point can be traced back to 1954. Ref. [1] first considered the problem of detecting a change-point in a sample parameter value. The related problem has been studied extensively since the 1990s. Ref. [2] studied the problem of a mean shift in a linear process and used the least-squares method to estimate the change-point. Ref. [3] proposed a method for detecting and estimating multiple change-points, and proved the corresponding theoretical properties. Another area attracting scholarly attention is the problem of change-points in time-series models. Ref. [4] applied the minimum description length (MDL) criterion to estimate change-points in piecewise auto-regressive (AR) processes. Ref. [5] constructed a cumulative sum (CUSUM) statistic to identify structural changes in multivariate time-series models. Ref. [6] considered the problem of structural change in an autoregressive model and proposed a method using the group lasso to estimate the change-points. Ref. [7] proposed a new test using the eigensystem to ascertain change-points. Recent works have discussed the high-dimensional case (dimension larger than the sample size). Ref. [8] proposed the sparsified binary segmentation (SBS) algorithm for the high-dimensional time-series change-point detection problem. Ref. [9] considered high-dimensional data and used a two-stage approach to estimate the change-point: dimensionality reduction followed by the construction of a CUSUM statistic. All of these are offline methods, which analyze change-points based on a historical dataset. However, in some fields, online approaches for monitoring changes in a system are important because observations are obtained sequentially and timely decisions are needed. Ref. [10] proposed a method for sequentially detecting change-points using the likelihood ratio test.
Despite the good performance of the aforementioned methods, their limitation is that they consider a single sequence of data. Meanwhile, panel data, which are a two-dimensional collection of time-series and cross-sectional data, are common in practical applications such as economics and finance. Changes may occur in panel data that require detection and estimation before modeling. The aforementioned methods could be used to detect or estimate the change-points in a single sequence; however, when the change-points of multiple sequences are correlated, integrating the information across sequences benefits their detection and estimation. Some works do consider the estimation of change-points in panel data. Refs. [11,12] first studied the problem of change-points in N sequences and proposed an estimator based on the maximum likelihood method. Ref. [13] proposed a method to estimate change-points using the least-squares and quasi-maximum likelihood methods, but only for the case where the panels share common change-points. Ref. [14] used common correlated effects estimators for change-point estimation in heterogeneous panels. Ref. [15] compared ordinary least squares and first differences, and found that the first-difference estimator is robust to stationary or nonstationary regressors and error terms. Ref. [16] proposed a new CUSUM estimator for common mean change-points in panel data, which performed better than the least-squares estimator proposed by Ref. [13]. Ref. [17] considered dependent and nonstationary panels and developed a novel estimator. However, these methods assume that there is a common change-point across panels. This is a very strong assumption, and some evidence shows that it does not hold in many cases. Ref. [18] allowed different change-points between panels and proposed a grouping method; however, it can only estimate the most recent change-point. Considering linear panel data models and allowing the group structure to change, Ref. [19] proposed a least-squares method and an iterative estimation approach to estimate change-points, group membership, and coefficients simultaneously. In addition, some scholars have studied the problem of change-point detection in panel data. Ref. [20] developed a fluctuation test and Wald statistics for detecting change-points in panel data. Based on the CUSUM method, Ref. [21] proposed a new statistic and established the corresponding asymptotic distribution to detect common change-points. Ref. [22] proposed a ratio-type test statistic for change-point detection for fixed and relatively small panel sizes. Ref. [23] considered smooth structural changes and developed two consistent tests. An asymptotic method and two new bootstrap tests were proposed by Ref. [24] for a sequential change-point in panel data. Ref. [25] proposed a general approach for testing change-points with a large number of panels. Based on the cumulative sum of ordinary least-squares residuals, Ref. [26] proposed a new method for testing whether there are common change-points in heterogeneous panel data. All of the above methods regard detection and estimation as two separate problems and study them separately. Recently, some authors have studied new estimators to achieve simultaneous detection and estimation in a single step; most are lasso-type methods, which Ref. [27] reviewed.
Assuming that each panel is a linear model with the same coefficients, Ref. [28] proposed the adaptive group fused lasso (AGFL) to detect and estimate common change-points. Then, relaxing the assumption of common change-points and allowing the number and location of change-points to differ between groups, Ref. [29] developed the grouped AGFL (GAGFL) for heterogeneous structural changes in panel data. However, lasso-type methods take a long time to solve because the objective function contains an absolute-value term and tuning parameters must be selected. Moreover, they all require the model parameters within a group to be identical, an assumption that is often violated in practice.
In this paper, we study the mean change-point problem for panel data. We further relax the model assumptions to allow different change-points and model parameters between panels. A new statistic and an iterative algorithm are proposed to simultaneously detect and estimate change-points in panel data. Although the problem is equivalent to the G-median problem and is NP-hard, we use an open-source solver, which has a low computational cost and works well in practice. Extensive numerical experiments and practical applications demonstrate the good performance of the new method.
The remainder of this article is organized as follows. In Section 2, the problem of grouped change-points in panel data is presented, and a new method for detecting and estimating the change-points is proposed. Some numerical experiments are performed to demonstrate the performance of the new method in Section 3. In Section 4, we apply the new method to the stock and breast cancer datasets. The conclusions and remarks are discussed in Section 5.

2. Methodology

We first consider the simple situation with one change-point in each sequence of panel data, where the location of the change-point may differ between sequences. Let $\{x_{i,t}\}_{1\le t\le T}$ be the $i$th sequence of panel data for $1\le i\le N$ and define $y_{i,t}$ as
$$y_{i,t}=\begin{cases}x_{i,t}, & 0<t\le t_i,\\ x_{i,t}+u_i, & t_i<t\le T,\end{cases}$$
where $t_i$ is the true location of the change-point, $u_i$ is a nonzero constant, and, for each $i$, $\{x_{i,t}\}_{1\le t\le T}$ is a time series. Define
$$x_{i,t}=\sum_{j=0}^{\infty} f_{i,j}\,\epsilon_{t-j},$$
where $\{\epsilon_t\}$ is a sequence of independent variables with zero mean and finite variance, such that $\sum_{j=0}^{\infty} j\,|f_{i,j}|<\infty$. Assume the panels are independent of each other and that the $N$ panels can be divided into $G$ groups, each of which shares the same change-point. Define the groups as $I_{1:G}=\{I_1,\dots,I_G\}$ with $I_g\subset\{1,\dots,N\}$, where the groups are pairwise disjoint and their union is the full set $\{1,\dots,N\}$. The number of elements in group $g$ is $N_g$ and $\sum_{g=1}^{G} N_g=N$. Denote the true change-points as $t_{1:G}=(t_1,\dots,t_G)$; that is, for every series $i\in I_g$, the change-point is located at $t_g$. For $s\le t$, the set of observations for panel $i$ from time $s$ to time $t$ is denoted $y_{i,s:t}=(y_{i,s},\dots,y_{i,t})$. We are interested in detecting the change-points in the panel data, and we use a minimum penalized cost approach to do so.
First, we describe the method for a univariate time series. For panel $i$, the penalty cost function is defined as
$$Q_i(\tau)=\begin{cases}C(y_{i,1:\tau})+C(y_{i,\tau+1:T})+\beta, & \tau=1,\dots,T-1,\\ C(y_{i,1:T}), & \tau=0,\end{cases}$$
where $\beta>0$ is a penalty parameter. Here, we take $\beta=O_p(\log T)$ and
$$C(y_{i,s:t})=\min_{\theta}\sum_{j=s}^{t}\gamma(y_{i,j};\theta),$$
where $\gamma(y_{i,j};\theta)=(y_{i,j}-\theta)^2$ is the square loss function and $\theta$ is a segment-specific location parameter. The estimator is
$$\hat{\tau}=\arg\min_{\tau} Q_i(\tau).$$
An estimate $\hat{\tau}=0$ means there is no change-point; thus, the method can simultaneously detect and estimate the change-point.
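To make the single-series statistic concrete, the following is a minimal NumPy sketch (not the authors' code) of $Q_i(\tau)$ with the square loss and $\beta=\log T$; the names segment_cost and detect_single are illustrative.

```python
# Minimal sketch: single-series detection via the penalty cost Q_i(tau)
# with the square loss, where beta = log T (an illustrative default).
import numpy as np

def segment_cost(y):
    """C(y_{s:t}) = min_theta sum (y_j - theta)^2, attained at the segment mean."""
    return float(np.sum((y - y.mean()) ** 2)) if y.size else 0.0

def detect_single(y, beta=None):
    """Return (tau_hat, Q_i(tau_hat)); tau_hat = 0 means no change-point."""
    T = len(y)
    beta = np.log(T) if beta is None else beta
    costs = {0: segment_cost(y)}                      # tau = 0: a single segment
    for tau in range(1, T):                           # tau = 1, ..., T - 1
        costs[tau] = segment_cost(y[:tau]) + segment_cost(y[tau:]) + beta
    tau_hat = min(costs, key=costs.get)
    return tau_hat, costs[tau_hat]

# Example: a mean shift of size 1 at t = 50 in a series of length 100.
rng = np.random.default_rng(0)
y = rng.normal(size=100) + np.r_[np.zeros(50), np.ones(50)]
print(detect_single(y))   # tau_hat should be close to 50
```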
Then, we extend this method to panel data. If $G$ is known, the statistic for panel data is defined as
$$Q=\min_{I_{1:G},\,t_{1:G}}\sum_{g=1}^{G}\sum_{i\in I_g} Q_i(t_g),$$
$$(\hat{I}_{1:G},\hat{t}_{1:G})=\arg\min_{I_{1:G},\,t_{1:G}}\sum_{g=1}^{G}\sum_{i\in I_g} Q_i(t_g).$$
To solve this, rewrite $Q$ as
$$Q=\min_{S}\sum_{i=1}^{N}\min_{t\in S} Q_i(t),$$
where $S\subset\{0,1,2,\dots,T-1\}$ and $|S|=G$. According to [30], the problem is equivalent to the $G$-median problem and can be solved as an integer programming problem. Let
$$\xi_{i,t}=\begin{cases}1, & \text{if series } i \text{ has a change-point at time } t,\\ 0, & \text{otherwise},\end{cases}\qquad
\nu_{t}=\begin{cases}1, & \text{if there is a change-point in any series at time } t,\\ 0, & \text{otherwise}.\end{cases}$$
Thus,
$$\begin{aligned}
\min\ & \sum_{i=1}^{N}\sum_{t=0}^{T-1} Q_i(t)\,\xi_{i,t}\\
\text{s.t.}\ & \sum_{t=0}^{T-1}\xi_{i,t}=1,\quad \forall i,\\
& \xi_{i,t}\le \nu_t,\quad \forall i,t,\\
& \sum_{t=0}^{T-1}\nu_t=G.
\end{aligned}$$
Many methods are available for solving integer programming problems, such as branch and bound and cutting-plane algorithms. SCIP is an open-source solver and is used here to obtain a fast solution. Although it is not guaranteed to find the global optimum, we find empirically that it leads to good estimates of the change-points in Section 3.
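As an illustration of how this integer program can be passed to SCIP, the following sketch uses the PySCIPOpt interface (an assumption on tooling; any integer programming solver would work). Q is assumed to be a precomputed N x T array holding the values $Q_i(t)$, and group_changepoints is an illustrative name.

```python
# Sketch of the integer program solved with SCIP via PySCIPOpt.
from pyscipopt import Model, quicksum

def group_changepoints(Q, G):
    N, T0 = len(Q), len(Q[0])          # candidate times t = 0, ..., T - 1
    m = Model("G-median")
    xi = {(i, t): m.addVar(vtype="B") for i in range(N) for t in range(T0)}
    nu = {t: m.addVar(vtype="B") for t in range(T0)}
    for i in range(N):                  # each series picks exactly one candidate
        m.addCons(quicksum(xi[i, t] for t in range(T0)) == 1)
    for i in range(N):                  # a series may only pick an opened time
        for t in range(T0):
            m.addCons(xi[i, t] <= nu[t])
    m.addCons(quicksum(nu[t] for t in range(T0)) == G)   # exactly G group times
    m.setObjective(quicksum(Q[i][t] * xi[i, t] for i in range(N) for t in range(T0)),
                   "minimize")
    m.optimize()
    # Read back which candidate time each series was assigned to.
    assign = {i: max(range(T0), key=lambda t: m.getVal(xi[i, t])) for i in range(N)}
    return assign, m.getObjVal()
```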
In practice, $G$ is unknown. We use the MDL criterion to determine $G$ (Ref. [29] used the Bayesian information criterion to determine $G$, but it needed to estimate a parameter in advance). Under the MDL, the number of choices for the $G$ change-points is approximately $T^G$, and each of the $N$ time series can choose which of the $G$ change-points to follow, which gives $G^N$ possible choices [18]. Thus, define
$$Q_G=Q+N\log_2 G+G\log_2 T,$$
and
$$\hat{G}=\arg\min_{G\in\{1,\dots,N\}} Q_G.$$
This can be solved by a traversal algorithm: for each $G=1,\dots,N$, $Q$ is calculated and then $Q_G$ is calculated; the $G$ that minimizes $Q_G$ is chosen as our estimate.
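A sketch of this traversal, reusing the hypothetical group_changepoints helper from the previous sketch; in practice the search can be capped at a small G_max rather than running all the way up to N.

```python
# Sketch: choose G by the MDL-penalized statistic Q_G = Q + N*log2(G) + G*log2(T).
import numpy as np

def choose_G(Q, T, G_max=5):
    N = len(Q)
    best = None
    for G in range(1, G_max + 1):
        _, Q_val = group_changepoints(Q, G)           # hypothetical helper above
        Q_G = Q_val + N * np.log2(G) + G * np.log2(T)
        if best is None or Q_G < best[1]:
            best = (G, Q_G)
    return best[0]
```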
Of course, the new method can be extended to the multiple change-points problem. If some panels have multiple change-points, following [31], the penalty cost is defined as
$$Q(y_{1:T};\tau_{1:k})=\sum_{i=0}^{k} C(y_{\tau_i+1:\tau_{i+1}})+\beta k,$$
where $\tau_0=0$ and $\tau_{k+1}=T$. If $N>1$, define the minimum cost for segmenting series $i$ as
$$Q_i=\min_{\tau_i,m_i} Q(y_{i,(1:T)};\tau_i)=\min_{\tau_i,m_i}\Big\{\sum_{j=0}^{m_i} C(y_{i,(\tau_{i,j}+1:\tau_{i,j+1})})+\beta m_i\Big\},$$
and the estimator is
$$(\hat{\tau}_i,\hat{m}_i)=\arg\min_{\tau_i,m_i}\Big\{\sum_{j=0}^{m_i} C(y_{i,(\tau_{i,j}+1:\tau_{i,j+1})})+\beta m_i\Big\}.$$
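As a concrete illustration of this per-series step, the following hedged sketch uses the ruptures package (mentioned just below) with binary segmentation and an l2 cost; detect_multiple and the penalty value are illustrative choices, not necessarily the authors' exact settings.

```python
# Sketch: per-series multiple change-point estimation with binary segmentation.
import numpy as np
import ruptures as rpt

def detect_multiple(y, beta):
    """Return estimated change-point locations for one series (excluding T)."""
    algo = rpt.Binseg(model="l2").fit(np.asarray(y).reshape(-1, 1))
    bkps = algo.predict(pen=beta)      # penalized selection of the number of breaks
    return bkps[:-1]                   # ruptures appends T as the final breakpoint

rng = np.random.default_rng(1)
y = np.concatenate([rng.normal(0, 1, 40), rng.normal(1.5, 1, 30), rng.normal(0, 1, 30)])
print(detect_multiple(y, beta=np.log(len(y))))   # roughly [40, 70]
```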
Using the binary segmentation method [32] and the ruptures package in Python, we can simultaneously obtain the number and locations of the change-points. For panel data, define the groups as $I_{1:G}=\{I_1,\dots,I_G\}$; the number of change-points for group $g$ is $m_g$ and the set of change-points for group $g$ is $t_g=\{t_{g,1},\dots,t_{g,m_g}\}$, where $1\le g\le G$. Thus, if $G$ is known, it is natural to define
$$Q=\min_{I_{1:G},\,t_1,\dots,t_G}\sum_{g=1}^{G}\sum_{i\in I_g} Q(y_{i,(1:T)};t_g).$$
To solve this model, an iterative algorithm (Algorithm 1) is proposed. In Section 3.2, we show the convergence rate of the algorithm. For group I ^ g , the estimate of the change-points within the group is defined as
$$(\hat{t}_g,\hat{m}_g)=\arg\min_{t_g,m_g}\sum_{i\in\hat{I}_g} Q(y_{i,(1:T)};t_g).$$
Algorithm 1: Iterative algorithm with G known.
Input:
  •  Panel data $\{y_{i,t}\}_{1\le t\le T}$ for $i=1,\dots,N$;
  •  The measure of fit $\gamma(y_{i,j};\theta)$, which depends on the data;
  •  Number of iterations $p$;
  •  Number of groups $G$.
Output:
  •  $t_g^{(s)}$ for $g=1,\dots,G$;
  •  $I_{1:G}^{(s+1)}$.
1. Initialize $s=0$;
2. Calculate the initial grouping result $I_{1:G}^{(0)}$, assuming each time series has only one change-point;
3. repeat
4.   According to the grouping, calculate all the change-points in each group to obtain the sets of change-points $t_1^{(s)},\dots,t_G^{(s)}$:
     $$t_g^{(s)}=\arg\min_{t_g}\sum_{i\in I_g^{(s)}} Q(y_{i,(1:T)};t_g);$$
5.   Redetermine the grouping according to the sets of change-points:
     $$g_i^{(s+1)}=\arg\min_{g\in\{1,\dots,G\}} Q(y_{i,(1:T)};t_g^{(s)}),$$
     and obtain $I_{1:G}^{(s+1)}$;
6.   Set $s=s+1$;
7. until $s>p$, or $t_g^{(s)}=t_g^{(s-1)}$ for $g=1,\dots,G$ and $g_i^{(s+1)}=g_i^{(s)}$ for $i=1,\dots,N$.
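The following is an illustrative Python sketch of Algorithm 1, not the authors' implementation: cost(y_i, t_g) stands for $Q(y_{i,(1:T)};t_g)$, best_changepoints stands for the within-group minimization in step 4 (e.g., binary segmentation on the pooled group cost), and the initial grouping here is a simple placeholder rather than the single-change-point initialization described in step 2.

```python
# Illustrative sketch of Algorithm 1 (G known).
import numpy as np

def algorithm1(Y, G, cost, best_changepoints, p=20):
    N = len(Y)
    # Placeholder initial grouping (step 2 in the paper uses single-change-point
    # estimates instead); a production version should guard against empty groups.
    groups = [i % G for i in range(N)]
    cps = [None] * G
    for s in range(p):                            # steps 3-7: alternate updates
        new_cps = [best_changepoints([Y[i] for i in range(N) if groups[i] == g])
                   for g in range(G)]             # step 4: change-points per group
        new_groups = [int(np.argmin([cost(Y[i], new_cps[g]) for g in range(G)]))
                      for i in range(N)]          # step 5: reassign each series
        if new_cps == cps and new_groups == groups:
            break                                 # converged
        cps, groups = new_cps, new_groups
    return cps, groups
```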
If $G$ is unknown, we add a penalty on $G$: $Q_G=Q+N\log_2 G+\sum_{g=1}^{G}(\log_2 T+m_g\log_2 T)$. This uses the MDL criterion: for each group, the number of change-points $m_g$ has $T$ possible choices and the locations of the change-points have approximately $T^{m_g}$ possible choices; in addition, each of the $N$ time series can choose which group of change-points to follow, resulting in $G^N$ possible choices. The solution procedure is shown in Algorithm 2.
Algorithm 2: Iterative algorithm with G unknown.
Input:
  •  Panel data $\{y_{i,t}\}_{1\le t\le T}$ for $i=1,\dots,N$;
  •  The measure of fit $\gamma(y_{i,j};\theta)$, which depends on the data;
  •  Number of iterations $p$.
Output:
  •  $\hat{G}$;
  •  $t_g^{(s)}$ for $g=1,\dots,\hat{G}$;
  •  $I_{1:\hat{G}}^{(s+1)}$.
1. for $G=1,2,3,4,5$ do
2.   calculate $t_g^{(s)}$ for $g=1,\dots,G$ and $I_{1:G}^{(s+1)}$ via Algorithm 1;
3.   calculate $Q_G=Q+N\log_2 G+\sum_{g=1}^{G}(\log_2 T+\hat{m}_g\log_2 T)$;
4. end
5. $\hat{G}=\arg\min_{G} Q_G$.
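A sketch of Algorithm 2 built on the algorithm1 sketch above; total_cost is a hypothetical helper returning the fitted value of Q for a given grouping and set of change-points.

```python
# Sketch of Algorithm 2: run Algorithm 1 for each candidate G and keep the G
# that minimizes the MDL-penalized statistic Q_G.
import numpy as np

def algorithm2(Y, T, cost, best_changepoints, G_max=5):
    results = {}
    for G in range(1, G_max + 1):
        cps, groups = algorithm1(Y, G, cost, best_changepoints)
        Q = total_cost(Y, cps, groups, cost)                  # hypothetical helper
        m = [len(cps[g]) for g in range(G)]                   # change-points per group
        Q_G = Q + len(Y) * np.log2(G) + sum(np.log2(T) + m_g * np.log2(T) for m_g in m)
        results[G] = (Q_G, cps, groups)
    G_hat = min(results, key=lambda G: results[G][0])
    return G_hat, results[G_hat]
```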
Finally, we establish the consistency of the number and location of the change-points in each group under the condition of correct grouping. Using the binary segmentation method, denote each estimated change-point by $\hat{\tau}$ and the corresponding true change-point by $\tau^0$. We shall prove the following theorem:
Theorem 1.
Assuming the change in the mean is bounded, for large T and fixed N, we have
$$|\hat{r}-r^0|=O_p(T^{-1/2}),$$
where $\hat{r}=\hat{\tau}/T$ and $r^0=\tau^0/T$.
Proof. 
The binary segmentation method searches for the change-point that lowers the sum of costs. Define
$$S_i(\tau)=\sum_{j=1}^{\tau}(y_{i,j}-\bar{y}_1)^2+\sum_{j=\tau+1}^{T}(y_{i,j}-\bar{y}_2)^2,$$
then, for group $g$,
$$\hat{\tau}=\arg\min_{\tau}\sum_{i=1}^{N_g} S_i(\tau).$$
Let
$$U(\tau)=\frac{1}{N_g T}\sum_{i=1}^{N_g} S_i(\tau).$$
According to [13] (Lemmas A.1 and A.2), for fixed $N_g$, we have
$$\sup_{1\le\tau\le T}|U(\tau)-EU(\tau)|=O_p(T^{-1/2}),$$
and
$$EU(\tau)-EU(\tau^0)\ge C\,|\tau-\tau^0|/T,$$
where $C>0$. Then, we have
$$\begin{aligned}
U(\tau)-U(\tau^0)&=U(\tau)-EU(\tau)-[U(\tau^0)-EU(\tau^0)]+EU(\tau)-EU(\tau^0)\\
&\ge -2\sup_{1\le j\le T}|U(j)-EU(j)|+EU(\tau)-EU(\tau^0)\\
&\ge -2\sup_{1\le j\le T}|U(j)-EU(j)|+C\,|\tau-\tau^0|/T.
\end{aligned}$$
The above inequality holds for each $\tau\in[1,T]$; in particular, it holds for $\hat{\tau}$. From $U(\hat{\tau})-U(\tau^0)\le 0$, we obtain
$$|\hat{\tau}-\tau^0|/T\le C^{-1}\cdot 2\sup_{1\le j\le T}|U(j)-EU(j)|,$$
so
$$|\hat{r}-r^0|=O_p(T^{-1/2}).\qquad\square$$
Following Ref. [33], take $\beta=4C(\epsilon)\log T$, where $C(\epsilon)<\infty$; we then have the following theorem:
Theorem 2.
For large T and fixed N, we have
$$P(\hat{m}_g=m_g^0)\to 1,\quad g=1,2,\dots,G,$$
where $m_g^0$ is the true number of change-points for group $g$.
Proof. 
Following Ref. [33], for each panel, we can estimate the number and positions of the change-points by minimizing the penalized cost function, and
$$P(\hat{m}_i=m_i^0)\to 1,\quad i=1,\dots,N,$$
where $m_i^0$ is the true number of change-points for panel $i$.
Specifically, for panel $i$, define
$$Q_i(\tau)=\sum_{k=1}^{m_i+1}\sum_{t=\tau_{k-1}+1}^{\tau_k}(y_{i,t}-\theta_{i,k})^2,$$
where $\theta_{i,k}=\bar{y}_i(\tau_{k-1},\tau_k)=\frac{1}{\tau_k-\tau_{k-1}}\sum_{t=\tau_{k-1}+1}^{\tau_k} y_{i,t}$. Then
$$(\hat{\tau},\hat{m}_i)=\arg\min_{m_i}\arg\min_{\tau}\frac{1}{T}\{Q_i(\tau)+\beta m_i\}.$$
Following Ref. [33], define
$$J_i(\tau)=\frac{1}{T}\big(Q_i(\tau)-Q_i(\tau^0)\big),$$
$$K_i(\tau)=\frac{1}{T}\sum_{k=1}^{m_i+1}\sum_{t=\tau_{k-1}+1}^{\tau_k}\big(Ey_{i,t}-E\theta_{i,k}\big)^2,$$
$$V_i(\tau)=\frac{1}{T}\sum_{k=1}^{m_i+1}\left[\frac{\big(\sum_{t=\tau_{k-1}^0+1}^{\tau_k^0} x_{i,t}\big)^2}{\tau_k^0-\tau_{k-1}^0}-\frac{\big(\sum_{t=\tau_{k-1}+1}^{\tau_k} x_{i,t}\big)^2}{\tau_k-\tau_{k-1}}\right],$$
$$W_i(\tau)=\frac{1}{2T}\sum_{k=1}^{m_i+1}\left[\sum_{t=\tau_{k-1}^0+1}^{\tau_k^0} x_{i,t}\,\mu_k^0-\sum_{t=\tau_{k-1}+1}^{\tau_k} x_{i,t}\,E\theta_{i,k}\right],$$
where $\tau^0$ denotes the true change-points and $\mu_k^0$ is the true mean of segment $k$. Then $J_i(\tau)=K_i(\tau)+V_i(\tau)+W_i(\tau)$. According to [33] (Theorem 9 and its proof), we have
$$\frac{1}{T}\{Q_i(\hat{\tau})+\beta\hat{m}_i\}\le\frac{1}{T}\{Q_i(\tau^0)+\beta m_i^0\},$$
$$K_i(\hat{\tau})+V_i(\hat{\tau})+W_i(\hat{\tau})+\frac{\beta}{T}(\hat{m}_i-m_i^0)\le 0,$$
and, for any $0\le m\le T$ with $m\ne m_i^0$,
$$P(\hat{m}_i=m)\le P\Big(K_i(\hat{\tau})+V_i(\hat{\tau})+W_i(\hat{\tau})+\frac{\beta}{T}(m-m_i^0)\le 0\Big)\le P\Big(\min_{\tau}\big\{K_i(\tau)+V_i(\tau)+W_i(\tau)+\frac{\beta}{T}(m-m_i^0)\big\}\le 0\Big)\to 0,\quad T\to\infty.$$
For group $g$, define
$$(\hat{\tau},\hat{m}_g)=\arg\min_{m_g}\arg\min_{\tau}\frac{1}{T}\sum_{i\in I_g}\{Q_i(\tau)+\beta m_g\}.$$
Then we have
$$\frac{1}{T}\sum_{i\in I_g}\{Q_i(\hat{\tau})+\beta\hat{m}_g\}\le\frac{1}{T}\sum_{i\in I_g}\{Q_i(\tau^0)+\beta m_g^0\},$$
$$\sum_{i\in I_g}\Big\{K_i(\hat{\tau})+V_i(\hat{\tau})+W_i(\hat{\tau})+\frac{\beta}{T}(\hat{m}_g-m_g^0)\Big\}\le 0,$$
and, for any $0\le m\le T$ with $m\ne m_g^0$,
$$P(\hat{m}_g=m)\le P\Big(\sum_{i\in I_g}\Big\{K_i(\hat{\tau})+V_i(\hat{\tau})+W_i(\hat{\tau})+\frac{\beta}{T}(m-m_g^0)\Big\}\le 0\Big)\le\sum_{i\in I_g} P\Big(K_i(\hat{\tau})+V_i(\hat{\tau})+W_i(\hat{\tau})+\frac{\beta}{T}(m-m_g^0)\le 0\Big)\le\sum_{i\in I_g} P\Big(\min_{\tau}\big\{K_i(\tau)+V_i(\tau)+W_i(\tau)+\frac{\beta}{T}(m-m_g^0)\big\}\le 0\Big).$$
Hence $P(\hat{m}_g=m)\to 0$ as $T\to\infty$ because $N_g$ is fixed. □

3. Numerical Experiments

3.1. Evaluation Criteria

To evaluate the estimation performance of the new method, three types of evaluation indicators are used. First, to assess the selection of the number of groups $G$, we perform 1000 replications, and the empirical probability is defined as
$$P(G=i)=c_i/1000,$$
where $c_i$ is the number of replications in which the statistic is minimized at $G=i$.
Then, to measure the accuracy of the grouping, we use the set coverage ($D$), which is defined as
$$D_j=1-\frac{|I_j\cap\hat{I}_j|}{\sqrt{|I_j|\,|\hat{I}_j|}},$$
where $j=1,2,\dots,G$ and $D=(D_1+\dots+D_G)/G$.
Finally, for the accuracy of the estimated locations, we use the root mean square error (RMSE) in the one-change-point case, defined as
$$RMSE=\frac{1}{1000}\sum_{l=1}^{1000}\sqrt{\frac{1}{G}\sum_{g=1}^{G}(t_g-\hat{t}_{l,g})^2},$$
where $\hat{t}_{l,g}$ is the estimate of the change-point position of group $g$ obtained in the $l$th replication.
For multiple change-points, we use the Hausdorff distance (HD) and the frequency of correct estimation of the number of change-points ($F$). We define
$$HD(\hat{t}_g,t_g)=\max\{D(\hat{t}_g,t_g),D(t_g,\hat{t}_g)\},$$
where $D(A,B)=\sup_{b\in B}\inf_{a\in A}|a-b|$ for any sets $A$ and $B$.
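A short sketch of the set coverage and Hausdorff distance as defined above; set_coverage and hausdorff are illustrative names.

```python
# Sketch of the grouping and change-point accuracy criteria.
import numpy as np

def set_coverage(I_true, I_hat):
    """D_j for one group; I_true and I_hat are sets of panel indices."""
    inter = len(I_true & I_hat)
    return 1.0 - inter / np.sqrt(len(I_true) * len(I_hat))

def hausdorff(t_hat, t_true):
    d = lambda A, B: max(min(abs(a - b) for a in A) for b in B)   # D(A, B)
    return max(d(t_hat, t_true), d(t_true, t_hat))

print(set_coverage({1, 2, 3, 4}, {2, 3, 4, 5}))   # 0.25
print(hausdorff([40, 70], [38, 72]))              # 2
```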

3.2. Detection and Estimation

To illustrate the superiority of the new method, we consider three time-series models for simulation and compare the new method with the least-squares estimator (LSE), with sample sizes $T$ of 80, 100, and 120 and panel numbers $N$ of 100 and 120. The group sizes satisfy $N_1:N_2:N_3=4:3:3$. Here, we take $\beta=\log T$. In each case, 1000 replications are carried out to calculate the mean values of the evaluation indexes, and the final simulation results are obtained.
Following [13,34], if there are $s$ common change-points, the statistic can be defined as
$$\hat{t}_{1:s}=\arg\min_{t_{1:s}} SSR(t_{1:s}),$$
$$SSR(t_{1:s})=\sum_{i=1}^{N} S_{iT}(t_{1:s}),$$
$$S_{iT}(t_{1:s})=\sum_{t=1}^{t_1}(y_{i,t}-\bar{y}_{i,1})^2+\sum_{t=t_1+1}^{t_2}(y_{i,t}-\bar{y}_{i,2})^2+\dots+\sum_{t=t_s+1}^{T}(y_{i,t}-\bar{y}_{i,s+1})^2,$$
where $\bar{y}_{i,j}=\frac{1}{t_j-t_{j-1}}\sum_{t=t_{j-1}+1}^{t_j} y_{i,t}$ for $j\in\{1,2,\dots,s+1\}$.
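For reference, a sketch of this benchmark LSE in the single common change-point case ($s=1$): the pooled SSR is minimized over one candidate break shared by all panels; lse_common_break is an illustrative name.

```python
# Sketch of the LSE benchmark: one common change-point shared by all N panels.
import numpy as np

def lse_common_break(Y):
    """Y: N x T array. Returns the common change-point minimizing the pooled SSR."""
    N, T = Y.shape
    best_t, best_ssr = None, np.inf
    for t in range(1, T):                       # candidate break after time t
        left, right = Y[:, :t], Y[:, t:]
        ssr = np.sum((left - left.mean(axis=1, keepdims=True)) ** 2) \
            + np.sum((right - right.mean(axis=1, keepdims=True)) ** 2)
        if ssr < best_ssr:
            best_t, best_ssr = t, ssr
    return best_t
```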
First, we consider the AR(1) model, and define
$$x_{i,t}=\begin{cases}0.1\,x_{i,t-1}+\varepsilon_{i,t}, & i\in I_1,\\ 0.2\,x_{i,t-1}+\varepsilon_{i,t}, & i\in I_2,\\ 0.15\,x_{i,t-1}+\varepsilon_{i,t}, & i\in I_3,\end{cases}$$
and
$$y_{i,t}=\begin{cases}x_{i,t}, & i\in I_1,\ 0<t\le t_1,\\ x_{i,t}+u_i, & i\in I_1,\ t_1<t\le T,\\ x_{i,t}, & i\in I_2,\ 0<t\le t_2,\\ x_{i,t}+u_i, & i\in I_2,\ t_2<t\le T,\\ x_{i,t}, & i\in I_3,\ 0<t\le t_3,\\ x_{i,t}+u_i, & i\in I_3,\ t_3<t\le T,\end{cases}$$
where $\varepsilon_{i,t}\sim N(0,1)$, $u_i\sim U(0.5,1)$ for $i=1,2,\dots,N$, and the true change-points are $t_1=0.5T$, $t_2=0.65T$, and $t_3=0.35T$.
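A sketch of this simulation design (the seed, loop structure, and function name simulate_ar1_panel are illustrative):

```python
# Sketch of the AR(1) design: three groups with coefficients 0.1 / 0.2 / 0.15,
# mean shifts u_i ~ U(0.5, 1) at 0.5T, 0.65T and 0.35T, and N1:N2:N3 = 4:3:3.
import numpy as np

def simulate_ar1_panel(N=100, T=100, seed=0):
    rng = np.random.default_rng(seed)
    sizes = [int(N * r) for r in (0.4, 0.3, 0.3)]
    phis, breaks = [0.1, 0.2, 0.15], [int(0.5 * T), int(0.65 * T), int(0.35 * T)]
    Y, labels = [], []
    for g, (n_g, phi, t_g) in enumerate(zip(sizes, phis, breaks)):
        for _ in range(n_g):
            eps = rng.normal(size=T)
            x = np.zeros(T)
            for t in range(1, T):
                x[t] = phi * x[t - 1] + eps[t]
            u = rng.uniform(0.5, 1.0)
            y = x + u * (np.arange(1, T + 1) > t_g)    # shift applies for t > t_g
            Y.append(y)
            labels.append(g)
    return np.array(Y), np.array(labels), breaks
```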
Table 1 presents the empirical probability of each estimated number of groups $G$. The true number of groups can be correctly estimated using MDL, and as $N$ increases the empirical probability of correct selection increases. To better illustrate this pattern, Figure 1 plots the empirical probability of correct selection with $T$ fixed.
For a given $G$, Table 2 shows the estimation results of the new method and the LSE. The new method always performs better than the LSE. Figure 2 presents the curves of $D$ and RMSE against the panel number $N$ and the time-series length $T$. In Figure 2, for fixed $N$, $D$ decreases as $T$ increases, indicating that the grouping improves. When we fix $T$, the RMSE becomes smaller as $N$ increases, which indicates that the estimates become more accurate.
However, Table 1 shows that there is a small probability that $\hat{G}$ is greater than $G$. When this happens, say $\hat{G}=4$, the three change-points can still be accurately estimated, and the fourth group $\hat{I}_4$ consists of individual elements drawn from $I_1$, $I_2$, and $I_3$. Table 3 shows the RMSE of the estimates in this case, where we only consider the RMSE of the three change-points. The estimation of the change-points can still achieve good results when the number of groups $G$ is misestimated, although the results are worse than when the number of groups is estimated correctly.
Then, we consider the MA(2) model
$$x_{i,t}=\begin{cases}\varepsilon_{i,t}+0.3\,\varepsilon_{i,t-1}-0.1\,\varepsilon_{i,t-2}, & i\in I_1,\\ \varepsilon_{i,t}+0.4\,\varepsilon_{i,t-1}-0.3\,\varepsilon_{i,t-2}, & i\in I_2,\\ \varepsilon_{i,t}+0.3\,\varepsilon_{i,t-1}-0.2\,\varepsilon_{i,t-2}, & i\in I_3,\end{cases}$$
and
$$y_{i,t}=\begin{cases}x_{i,t}, & i\in I_1,\ 0<t\le t_1,\\ x_{i,t}+u_i, & i\in I_1,\ t_1<t\le T,\\ x_{i,t}, & i\in I_2,\ 0<t\le t_2,\\ x_{i,t}+u_i, & i\in I_2,\ t_2<t\le T,\\ x_{i,t}, & i\in I_3,\ 0<t\le T,\end{cases}$$
where $\varepsilon_{i,t}\sim N(0,1)$, $u_i\sim U(0.5,1)$ for $i=1,2,\dots,N$, and the true change-points are $t_1=0.5T$ and $t_2=0.65T$.
Table 4 shows the empirical probability of $G$ taking the values 1, 2, 3, 4, and 5 in 1000 replications. It demonstrates that $G$ is estimated correctly with a probability of more than 90 percent. Figure 3 shows that the empirical probability of correct group selection increases as $N$ increases.
From Table 4, $G$ is chosen as 3. Given $\hat{G}=3$, we use the SCIP solver to estimate the change-points, as shown in Table 5. The RMSE of the new method is smaller than that of the LSE, which means that the new method performs better. Furthermore, the $D$ of the new method is small, which means that the grouping is accurate.
In Figure 4, we show the change in D and RMSE with T and N. D decreases as T increases and RMSE decreases as N increases.
Finally, we consider a time-series model with a trend term, and define
$$y_{i,t}=\begin{cases}u_{0,i}+u_{1,i}t+0.3\,\varepsilon_{i,t-1}+\varepsilon_{i,t}, & i\in I_1,\ 0<t\le t_1,\\ (u_{0,i}-1)+u_{1,i}t+0.3\,\varepsilon_{i,t-1}+\varepsilon_{i,t}, & i\in I_1,\ t_1<t\le T,\\ u_{0,i}+u_{1,i}t+0.3\,\varepsilon_{i,t-1}+\varepsilon_{i,t}, & i\in I_2,\ 0<t\le t_2,\\ u_{0,i}+(u_{1,i}+0.2)t+0.3\,\varepsilon_{i,t-1}+\varepsilon_{i,t}, & i\in I_2,\ t_2<t\le T,\\ u_{0,i}+u_{1,i}t+0.3\,\varepsilon_{i,t-1}+\varepsilon_{i,t}, & i\in I_3,\ 0<t\le t_3,\\ (u_{0,i}+0.2)+(u_{1,i}-0.2)t+0.3\,\varepsilon_{i,t-1}+\varepsilon_{i,t}, & i\in I_3,\ t_3<t\le T,\end{cases}$$
where $\varepsilon_{i,t}\sim N(0,1)$, $u_{0,i}\sim U(0,1)$, $u_{1,i}\sim U(0,0.5)$, and the true change-points are $t_1=0.5T$, $t_2=0.65T$, and $t_3=0.35T$. We define the square loss function as
$$\gamma(y_{i,j};\theta)=(y_{i,j}-\theta_0-\theta_1 j)^2.$$
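Under this loss, the segment cost amounts to the residual sum of squares of a least-squares line fitted within the segment; a small sketch follows (trend_segment_cost is an illustrative name, assuming the fitted line is in the time index):

```python
# Sketch: segment cost for the trend model, i.e., the residual sum of squares
# of a degree-1 polynomial (line) fitted to the segment by least squares.
import numpy as np

def trend_segment_cost(y, t_index):
    """y: segment observations; t_index: corresponding time points."""
    if len(y) < 2:
        return 0.0
    coef = np.polyfit(t_index, y, deg=1)          # slope and intercept
    resid = y - np.polyval(coef, t_index)
    return float(np.sum(resid ** 2))
```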
Table 6 shows the empirical probability of the estimated number of groups. The results indicate that using MDL can estimate the number of groups with a high empirical probability. Figure 5 shows the change in this empirical probability; notably, it approaches 1 as $N$ increases.
In Table 7, we display the $D$ and RMSE of the new estimator. The new method performs well on the time-series model with a trend. Figure 6 shows that $D$ decreases as $T$ increases, meaning that the grouping becomes more and more accurate. Further, the RMSE decreases as $N$ increases, which means that the estimates improve.
In the case of one change-point, the results can be summarized as follows: the new method performs better than LSE. With fixed T, as N increases, the empirical probability of choosing the right number of groups approaches 1 and the RMSE becomes smaller. With fixed N, the set coverage becomes smaller as T increases.
For multiple change-points, consider the AR(1) model, and define
$$x_{i,t}=\begin{cases}0.1\,x_{i,t-1}+\varepsilon_{i,t}, & i\in I_1,\\ 0.2\,x_{i,t-1}+\varepsilon_{i,t}, & i\in I_2,\\ 0.15\,x_{i,t-1}+\varepsilon_{i,t}, & i\in I_3,\end{cases}$$
and
$$y_{i,t}=\begin{cases}x_{i,t}, & i\in I_1,\ 0<t\le t_1 \text{ or } t_2<t\le T,\\ x_{i,t}+u_i, & i\in I_1,\ t_1<t\le t_2,\\ x_{i,t}+u_i, & i\in I_2,\ 0<t\le t_3,\\ x_{i,t}, & i\in I_2,\ t_3<t\le t_2,\\ x_{i,t}-u_i, & i\in I_2,\ t_2<t\le T,\\ x_{i,t}, & i\in I_3,\ 0<t\le t_4,\\ x_{i,t}+u_i, & i\in I_3,\ t_4<t\le T,\end{cases}$$
where $u_i\sim U(0.5,1)$, $u_i'\sim U(1,1.5)$, and $t_1,t_2,t_3,t_4$ change with $T$: when $T=80$, $t_1=30$, $t_2=60$, $t_3=20$, $t_4=40$; when $T=100$, $t_1=40$, $t_2=70$, $t_3=30$, $t_4=50$; and when $T=120$, $t_1=50$, $t_2=90$, $t_3=40$, $t_4=60$.
Table 8 shows the empirical probability of the estimated number of groups. An accurate estimate of the number of groups can be obtained by using the MDL criterion. To illustrate the convergence rate of Algorithm 1, we show the curve of the coverage ($D$) versus $s$ in Figure 7; the algorithm converges after five iterations. Following Ref. [13], when the number of change-points is unknown, we use the LSE combined with an AIC or BIC penalty for detection. The statistic is defined as
$$SSR(t_{1:s})=\sum_{i=1}^{N}\big(S_{iT}(t_{1:s})+s\beta\big),$$
where the number of change-points $s$ is unknown, $\beta=2$ for the AIC penalty, and $\beta=\log T$ for the BIC penalty. Table 9 presents the $D$, $F$, and $HD$ of the new method and the LSE. Clearly, the new method divides the groups accurately and accurately recovers the number and positions of the change-points in each group. With the AIC penalty, the number of change-points can be obtained accurately, while with the BIC penalty the estimated number is smaller than the true number of change-points.
Although the method of Ref. [29] cannot be applied to the above model, we can set the means within each group to be the same and define the following model:
$$y_{i,t}=\begin{cases}x_{i,t}, & i\in I_1,\ 0<t\le t_1 \text{ or } t_2<t\le T,\\ x_{i,t}+1, & i\in I_1,\ t_1<t\le t_2,\\ x_{i,t}+1, & i\in I_2,\ 0<t\le t_3,\\ x_{i,t}, & i\in I_2,\ t_3<t\le t_2,\\ x_{i,t}-1, & i\in I_2,\ t_2<t\le T,\\ x_{i,t}, & i\in I_3,\ 0<t\le t_4,\\ x_{i,t}+1, & i\in I_3,\ t_4<t\le T.\end{cases}$$
This model is equivalent to taking all the regression variables in Ref. [29] as 1, with
$$\beta_{1,t}=\begin{cases}0, & 0<t\le t_1 \text{ or } t_2<t\le T,\\ 1, & t_1<t\le t_2,\end{cases}\qquad
\beta_{2,t}=\begin{cases}1, & 0<t\le t_3,\\ 0, & t_3<t\le t_2,\\ -1, & t_2<t\le T,\end{cases}\qquad
\beta_{3,t}=\begin{cases}0, & 0<t\le t_4,\\ 1, & t_4<t\le T.\end{cases}$$
Then, we compare the new method with the method of Ref. [29] under this model. The tuning parameter $\lambda$ in Ref. [29] is selected by searching the interval [1, 10,000] over 100 logarithmically evenly spaced grid points. We present the results of the new method and Ref. [29] in Table 10 (in this case, given $G=3$, the new method splits the panels into two groups with a probability of less than 1%; the results presented here do not include these replications). The grouping of [29] is much better than that of the new method. This may be because Ref. [29] requires the same model parameters within a group and utilizes this information. For the estimation of the number and position of the change-points, the performance of the two methods is similar. However, when the means within a group differ, the new method can still be applied, whereas the method of Ref. [29] cannot.
Last, we implement the method of Ref. [29] in Python and report the computation times in Table 11 (average times over 100 replications; the CPU is an 11th Gen Intel Core i5-1135G7). The new method is faster than the method of Ref. [29], possibly because the objective function of Ref. [29] is more complex and its parameters need to be tuned.

4. Applications

4.1. Stock Dataset

We first apply our approach to a stock dataset, where the model parameters (means) differ between stocks. We choose the closing prices of the FTSE, FCHI, GDAXI, MIB, AEX, GEM, Shanghai, Shenzhen, CSI 300, and CHINA SME 100 ETF series for our analysis, giving panel data with N = 10. The data come from the Choice financial terminal, from which we take the weekly closing prices of the 10 series from June 2019 to May 2023. Our method is applied assuming that the mean of the data within each segment is constant, using the square loss function. Since the number of groups is unknown, we use Algorithm 2 to solve the problem, and Table 12 shows the values of the statistic (5) for different G.
According to the results, the 10 stocks are divided into two groups. The first group contains the five European stocks (FTSE, FCHI, GDAXI, MIB, and AEX) and the second group contains the five Chinese stocks (Shanghai, Shenzhen, CSI 300, GEM, and CHINA SME 100 ETF). This may be due to the differences in economic systems and market structures between China and Europe. Within each group, we obtain the number and positions of the change-points by minimizing the penalty cost function (8). Table 13 presents, in order, the positions of the change-points and the penalty costs obtained using the binary segmentation method, where $Q(0)$ denotes the penalty cost with no change-point, $Q(\hat{\tau})$ denotes the penalty cost with a change-point at $\hat{\tau}$, and $\hat{\tau}$ is accepted as a change-point when $Q(0)>Q(\hat{\tau})$.
From Table 13, the first group has six change-points and the second group has three. Figure 8 displays the change-point positions for the two groups; a jump in the mean can be observed before and after each estimated change-point. Some of the detected change-points can be readily associated with historical events. For Europe, the first change-point occurred in the last week of February 2020, due to the outbreak of COVID-19. The second occurred at the end of May 2020, the third in November 2020, the fourth in March 2021, the fifth at the end of February 2022 (perhaps because of the Russia–Ukraine war), and the last in early January 2023. For China, the first change-point came in early July 2020, which may be related to the control of COVID-19, the continued recovery of the domestic economy, and the implementation of relevant government policies. The second change-point occurred at the end of December 2020, and the last in March 2022, which may be due to the Russia–Ukraine war and COVID-19.

4.2. Breast Cancer Dataset

Then, the new method is applied to a multidimensional dataset. The dataset is built into sklearn.datasets and contains the malignant/benign (1/0) labels of breast cancer for 569 patients recorded in Wisconsin, together with 30 physiological indicators for each patient. The mean values of some indicators differ significantly between the benign and malignant classes, while others do not; thus, the new method can be applied to find which physiological indicators distinguish benign from malignant. First, we rearrange the data, placing the 212 malignant observations in front and the 357 benign observations behind, so that the true change-point is at 212. We treat each physiological indicator as a sequence and convert the multidimensional data into panel data for analysis. Since each indicator has at most one change-point and $G=2$, we can use Equation (2) to obtain the grouping and the change-point position. The results are shown in Figure 9. The mean symmetry, radius error, area error, and concave points error do not change at 212, while the remaining indicators, such as mean radius and mean texture, do change ($Q(0)=14{,}794.00$, $Q(212)=10{,}106.22$). A jump in the mean can be observed before and after the estimated change-point. This indicates that mean symmetry, radius error, area error, and concave points error cannot be used to distinguish between benign and malignant, but the other indicators can.
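A sketch of this analysis using sklearn's built-in loader together with the detect_single sketch given earlier; the per-feature standardization is an illustrative preprocessing choice, not necessarily the one used in the paper.

```python
# Sketch: rearrange the breast cancer data so one class comes first, then test
# each indicator for a mean change at the class boundary.
import numpy as np
from sklearn.datasets import load_breast_cancer

data = load_breast_cancer()
X, y = data.data, data.target
order = np.argsort(y, kind="stable")         # put one class first, the other after
X_sorted = X[order]
boundary = int(np.sum(y == y[order][0]))     # size of the first class
print("class boundary:", boundary)           # should be 212

for name, col in zip(data.feature_names, X_sorted.T):
    col = (col - col.mean()) / col.std()     # illustrative standardization
    tau_hat, _ = detect_single(col)          # reuse the single-series sketch
    print(f"{name}: estimated change-point {tau_hat}")
```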

5. Conclusions

Many panel datasets have emerged in finance and economics in recent years. Panel data are a two-dimensional collection of time-series and cross-sectional data, which can provide more information. However, due to many factors, the structure of panel data may change, for example through the impact of a financial crisis on stock prices. Research on changes in panel data structure is of great significance for reasonably and appropriately understanding economic and financial phenomena and preventing risks. Although each panel can be treated as a single time series for detection, this approach is not as accurate as multi-panel change-point detection. Here, a new statistic and an iterative algorithm are proposed to simultaneously detect and estimate the change-points in panel data.
The main contributions of this paper are as follows. A new statistic is constructed and an iterative algorithm is proposed to simultaneously detect and estimate change-points. The new method solves the problem of panels having different change-points and model parameters, and the new algorithm takes less time to solve. Through a large number of simulation experiments, we find the following: (1) The new method can accurately determine the number of groups; for fixed T, as N increases, the empirical probability of choosing the right number of groups increases. (2) Given the number of groups, the new method can accurately group the panels; for fixed N, as T increases, the grouping improves. (3) Given the number of groups, the new method is better than the LSE at estimating the positions of change-points; for fixed T, as N increases, the root mean square error steadily decreases. (4) The new method can be applied to multiple change-points to obtain the grouping and the number and positions of the change-points in each group accurately. In the applications, applying the new method to the stock dataset, we group multiple stocks and obtain the change-points of each group; applying the new method to the multidimensional breast cancer dataset, we group the dimensions and identify the dimensions with change-points, which distinguish benign from malignant.
Finally, we constructed the estimator by assuming that the panels are independent of each other. However, there may be correlations between panels in practice; for example, some stocks or neighboring regions will affect each other. The next challenge for us is to develop better estimators by using the correlations between panels.

Author Contributions

Methodology, H.L. and D.W.; Writing—original draft, H.L.; Writing—review & editing, D.W. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported by the National Natural Science Foundation of China (Grant No. NSFC 12171033).

Data Availability Statement

Data are public. We give the data source in the article.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Page, E.S. Continuous Inspection Schemes. Biometrika 1954, 41, 100–115. [Google Scholar] [CrossRef]
  2. Bai, J. Least squares estimation of a shift in linear processes. J. Time Ser. Anal. 1994, 15, 453–472. [Google Scholar] [CrossRef]
  3. Bai, J. Estimating multiple breaks one at a time. Econom. Theory 1997, 13, 315–352. [Google Scholar] [CrossRef]
  4. Davis, R.A.; Lee, T.C.M.; Rodriguez-Yam, G.A. Structural break estimation for nonstationary time series models. J. Am. Stat. Assoc. 2006, 101, 223–239. [Google Scholar] [CrossRef]
  5. Aue, A.; Hörmann, S.; Horváth, L.; Reimherr, M. Break detection in the covariance structure of multivariate time series models. Ann. Stat. 2009, 37, 4046–4087. [Google Scholar] [CrossRef]
  6. Chan, N.H.; Yau, C.Y.; Zhang, R.M. Group LASSO for structural break time series. J. Am. Stat. Assoc. 2014, 109, 590–599. [Google Scholar] [CrossRef]
  7. Kao, C.; Trapani, L.; Urga, G. Testing for instability in covariance structures. Bernoulli 2018, 24, 740–771. [Google Scholar] [CrossRef]
  8. Cho, H.; Fryzlewicz, P. Multiple-change-point detection for high dimensional time series via sparsified binary segmentation. J. R. Stat. Soc. Ser. Stat. Methodol. 2015, 77, 475–507. [Google Scholar] [CrossRef]
  9. Dette, H.; Pan, G.; Yang, Q. Estimating a Change Point in a Sequence of Very High-Dimensional Covariance Matrices. J. Am. Stat. Assoc. 2022, 117, 444–454. [Google Scholar] [CrossRef]
  10. Dette, H.; Gösmann, J. A likelihood ratio approach to sequential change point detection for a general class of parameters. J. Am. Stat. Assoc. 2020, 115, 1361–1377. [Google Scholar] [CrossRef]
  11. Joseph, L.; Wolfson, D.B. Estimation in multi-path change-point problems. Commun. Stat. Theory Methods 1992, 21, 897–913. [Google Scholar] [CrossRef]
  12. Joseph, L.; Wolfson, D.B. Maximum likelihood estimation in the multi-path change-point problem. Ann. Inst. Stat. Math. 1993, 45, 511–530. [Google Scholar] [CrossRef]
  13. Bai, J. Common breaks in means and variances for panel data. J. Econom. 2010, 157, 78–92. [Google Scholar] [CrossRef]
  14. Baltagi, B.H.; Feng, Q.; Kao, C. Estimation of heterogeneous panels with structural breaks. J. Econom. 2016, 191, 176–195. [Google Scholar] [CrossRef]
  15. Baltagi, B.H.; Kao, C.; Liu, L. Estimation and identification of change points in panel models with nonstationary or stationary regressors and error term. Econom. Rev. 2017, 36, 85–102. [Google Scholar] [CrossRef]
  16. Chen, Z.; Hu, Y. Cumulative sum estimator for change-point in panel data. Stat. Pap. 2017, 58, 707–728. [Google Scholar] [CrossRef]
  17. Pešta, M.; Peštová, B.; Maciak, M. Changepoint estimation for dependent and non-stationary panels. Appl. Math. 2020, 65, 299–310. [Google Scholar] [CrossRef]
  18. Bardwell, L.; Fearnhead, P.; Eckley, I.A.; Smith, S.; Spott, M. Most recent changepoint detection in panel data. Technometrics 2019, 61, 88–98. [Google Scholar] [CrossRef]
  19. Lumsdaine, R.L.; Okui, R.; Wang, W. Estimation of panel group structure models with structural breaks in group memberships and coefficients. J. Econom. 2023, 233, 45–65. [Google Scholar] [CrossRef]
  20. Emerson, J.; Kao, C. Testing for Structural Change of a Time Trend Regression in Panel Data; Center for Policy Research Working Papers 15; Center for Policy Research, Maxwell School, Syracuse University: Syracuse, NY, USA, 2000. [Google Scholar]
  21. Horváth, L.; Hušková, M. Change-point detection in panel data. J. Time Ser. Anal. 2012, 33, 631–648. [Google Scholar] [CrossRef]
  22. Peštová, B.; Pešta, M. Testing structural changes in panel data with small fixed panel size and bootstrap. Metrika 2015, 78, 665–689. [Google Scholar] [CrossRef]
  23. Chen, B.; Huang, L. Nonparametric testing for smooth structural changes in panel data models. J. Econom. 2018, 202, 245–267. [Google Scholar] [CrossRef]
  24. Chen, Z.; Hu, Y. Asymptotic and Bootstrap Tests for a Sequential Change-Point of Panel. Wuhan Univ. J. Nat. Sci. 2019, 24, 329–340. [Google Scholar] [CrossRef]
  25. Antoch, J.; Jan Hanousek, L.H.M.H.; Wang, S. Structural breaks in panel data: Large number of panels and short length time series. Econom. Rev. 2019, 38, 828–855. [Google Scholar] [CrossRef]
  26. Jiang, P.; Kurozumi, E. A new test for common breaks in heterogeneous panel data models. Econom. Stat. 2023. [Google Scholar] [CrossRef]
  27. Feng, Q.; Kao, C. Large-Dimensional Panel Data Econometrics: Testing, Estimation and Structural Changes; World Scientific: Singapore, 2021. [Google Scholar]
  28. Qian, J.; Su, L. Shrinkage estimation of common breaks in panel data models via adaptive group fused lasso. J. Econom. 2016, 191, 86–109. [Google Scholar] [CrossRef]
  29. Okui, R.; Wang, W. Heterogeneous structural breaks in panel data models. J. Econom. 2021, 220, 447–473. [Google Scholar] [CrossRef]
  30. Reese, J. Solution methods for the p-median problem: An annotated bibliography. Networks 2006, 48, 125–142. [Google Scholar] [CrossRef]
  31. Fearnhead, P.; Rigaill, G. Changepoint detection in the presence of outliers. J. Am. Stat. Assoc. 2019, 114, 169–183. [Google Scholar] [CrossRef]
  32. Truong, C.; Oudre, L.; Vayatis, N. Selective review of offline change point detection methods. Signal Process. 2020, 167, 107299. [Google Scholar] [CrossRef]
  33. Lavielle, M.; Moulines, E. Least-squares estimation of an unknown number of shifts in a time series. J. Time Ser. Anal. 2000, 21, 33–59. [Google Scholar] [CrossRef]
  34. Ditzen, J.; Karavias, Y.; Westerlund, J. Testing and estimating structural breaks in time series and panel data in Stata. arXiv 2021, arXiv:2110.14550. [Google Scholar]
Figure 1. AR(1) model: Change of empirical probability with T = 100.
Figure 2. AR(1) model: Change of coverage and root mean square error with N = 100 (upper) and T = 100 (lower).
Figure 3. MA(2) model: Change of empirical probability with T = 100.
Figure 4. MA(2) model: Change of coverage and root mean square error with N = 100 (upper) and T = 100 (lower).
Figure 5. Trend model: Change of empirical probability with T = 100.
Figure 6. Trend model: Change of coverage and root mean square error with N = 100 (upper) and T = 100 (lower).
Figure 7. The convergence rate of the algorithm: the curve of D vs. s.
Figure 8. Change-point detection positions for European (left) and Chinese (right) stocks.
Figure 9. No change in physiological indicators (left) and some change in physiological indicators (right).
Table 1. AR(1) model: Empirical probability of group number selection using MDL when G = 3.

  N    T     G=1  G=2  G=3    G=4    G=5
  100  80    0    0    0.883  0.080  0.037
  100  100   0    0    0.917  0.060  0.023
  100  120   0    0    0.929  0.050  0.021
  120  80    0    0    0.924  0.063  0.013
  120  100   0    0    0.933  0.054  0.013
  120  120   0    0    0.959  0.036  0.005
Table 2. AR(1) model: Coverage and root mean square error of the new method and LSE.

  N                  100                     120
  T            80      100     120     80      100     120
  new   D      0.2363  0.1977  0.1642  0.2363  0.1951  0.1652
        RMSE   0.5132  0.5128  0.4235  0.4203  0.3493  0.2658
  LSE   RMSE   0.8595  0.6356  0.5857  0.5882  0.4761  0.4207
Table 3. AR(1) model: Root mean square error of the new method with G = 4.

  N            100                     120
  T      80      100     120     80      100     120
  new    0.6039  0.5961  0.5707  0.4997  0.4736  0.4041
Table 4. MA(2) model: Empirical probability of group number selection using MDL when G = 3.

  N    T     G=1  G=2  G=3    G=4    G=5
  100  80    0    0    0.907  0.083  0.010
  100  100   0    0    0.911  0.077  0.012
  100  120   0    0    0.916  0.075  0.009
  120  80    0    0    0.954  0.036  0.010
  120  100   0    0    0.944  0.049  0.007
  120  120   0    0    0.937  0.059  0.004
Table 5. MA(2) model: Coverage and root mean square error of the new method and LSE.

  N                  100                     120
  T            80      100     120     80      100     120
  new   D      0.2280  0.1815  0.1469  0.2282  0.1810  0.1478
        RMSE   0.4658  0.3941  0.3568  0.3430  0.2881  0.2702
  LSE   RMSE   1.6675  1.2163  0.9985  0.9455  0.9287  0.8738
Table 6. Trend model: Empirical probability of group number selection using MDL when G = 3.

  N    T     G=1  G=2    G=3    G=4    G=5
  100  80    0    0.002  0.961  0.036  0.001
  100  100   0    0      0.954  0.040  0.006
  100  120   0    0      0.963  0.036  0.001
  120  80    0    0      0.980  0.019  0.001
  120  100   0    0      0.982  0.016  0.002
  120  120   0    0      0.981  0.016  0.003
Table 7. Trend model: Coverage and root mean square error of the new method.

  N                  100                     120
  T            80      100     120     80      100     120
  new   D      0.1266  0.1089  0.0911  0.1261  0.1085  0.0918
        RMSE   0.1169  0.1017  0.0837  0.0894  0.0775  0.0707
Table 8. AR(1) model: Empirical probability of group number selection using MDL when G = 3.

  N    T     G=1  G=2  G=3    G=4    G=5
  100  80    0    0    0.879  0.117  0.004
  100  100   0    0    0.880  0.118  0.002
  100  120   0    0    0.943  0.056  0.001
Table 9. Coverage, frequency, and Hausdorff distance of the new method and LSE.

  T                   80      100     120
  D                   0.1106  0.1010  0.0796
  New   I1    F       1       1       1
              HD/T    0.0003  0.0005  0.0003
        I2    F       0.976   0.981   0.999
              HD/T    0.0095  0.0037  0.0008
        I3    F       0.922   0.927   0.962
              HD/T    0.0316  0.0235  0.0113
  LSE   AIC   F       0.898   0.879   0.866
              HD/T    0.0145  0.0146  0.0116
        BIC   F       0       0       0
              HD/T    0.1280  0.1018  0.0857
Table 10. Coverage, frequency, and Hausdorff distance of the new method and GAGFL [29] with multiple change-points.

  Method              The New Method            GAGFL [29]
  T             80      100     120      80      100     120
  D             0.1375  0.1212  0.1094   0.0054  0.0022  0.0019
  I1    F       0.971   0.975   0.982    0.950   0.963   0.938
        HD/T    0.0196  0.0119  0.0062   0.0054  0.0047  0.0068
  I2    F       0.953   0.952   0.954    0.889   0.896   0.901
        HD/T    0.0164  0.0101  0.0059   0.0141  0.0114  0.0110
  I3    F       0.999   0.999   0.985    0.903   0.896   0.893
        HD/T    0.0005  0.0004  0.0040   0.0167  0.0204  0.0186
Table 11. The computation time (s) of the new method and GAGFL.

  T        80      100     120
  New      2.40    3.34    5.02
  GAGFL    151.99  226.90  347.87
Table 12. The values of the statistics for the different G.

  G      1       2       3       4       5
  Q_G    595.47  576.15  607.78  632.26  665.41
Table 13. The penalty cost of the estimated change-points.

                   The First Group                                      The Second Group
  Order            1        2       3       4       5       6      7       1        2       3      4
  $\hat{\tau}$     91       37      74      140     181     50     112     55       141     81     30
  Q(0)             1000.00  273.22  125.50  177.26  105.05  51.00  30.79   1000.00  419.82  91.99  44.08
  Q($\hat{\tau}$)  476.97   173.99  81.54   162.34  63.17   50.36  39.35   490.39   151.69  75.48  44.31