Article

Equivalence Analysis of Statistical Inference Results under True and Misspecified Multivariate Linear Models

1 College of Mathematics and Information Science, Shandong Technology and Business University, Yantai 264005, China
2 College of Business and Economics, Shanghai Business School, Shanghai 201400, China
* Author to whom correspondence should be addressed.
These authors contributed equally to this work.
Mathematics 2023, 11(1), 182; https://doi.org/10.3390/math11010182
Submission received: 2 October 2022 / Revised: 4 December 2022 / Accepted: 21 December 2022 / Published: 29 December 2022
(This article belongs to the Special Issue Contemporary Contributions to Statistical Modelling and Data Science)

Abstract:
This paper provides a complete matrix analysis on equivalence problems of estimation and inference results under a true multivariate linear model Y = X Θ + Ψ and its misspecified form Y = X Θ + Z Γ + Ψ with an augmentation part Z Γ through the cogent use of various algebraic formulas and facts in matrix analysis. The coverage of this study includes the matrix derivations of the best linear unbiased estimators under the true and misspecified models, and the establishment of necessary and sufficient conditions for the different estimators to be equivalent under the model assumptions.

1. Introduction

Throughout this article, we use ℝ^{m×n} to stand for the collection of all m × n matrices with real entries; A′, r(A), and R(A) to stand for the transpose, the rank, and the range (column space) of a matrix A ∈ ℝ^{m×n}, respectively; and I_m to denote the identity matrix of order m. Two symmetric matrices A and B of the same size are said to satisfy the inequality A ⪰ B in the Löwner partial ordering if A − B is positive semi-definite. The Kronecker product of any two matrices A and B is defined to be A ⊗ B = (a_{ij}B). The vectorization operator of a matrix A = [a₁, …, a_n] is defined to be vec(A) = [a₁′, …, a_n′]′. A well-known property of the vec operator applied to a triple matrix product is vec(AZB) = (B′ ⊗ A)vec(Z). The Moore–Penrose inverse of A ∈ ℝ^{m×n}, denoted by A⁺, is defined to be the unique solution G of the four matrix equations AGA = A, GAG = G, (AG)′ = AG, and (GA)′ = GA. We also denote by P_A = AA⁺, E_A = I_m − AA⁺, and F_A = I_n − A⁺A the three orthogonal projectors induced from A, which help to denote briefly the calculation processes related to generalized inverses of matrices. We also adopt the notation A⊥ = E_A, in particular when A is a block matrix. Further information about the orthogonal projectors P_A, E_A, and F_A and their applications in linear statistical models can be found, e.g., in [1,2,3].
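The projectors defined above are easy to check numerically. The following sketch (an illustration added here, not taken from the paper) computes A⁺ with NumPy and verifies the defining properties of P_A, E_A, and F_A on a rank-deficient matrix:

```python
import numpy as np

# Sketch: the Moore-Penrose inverse A^+ and the induced orthogonal projectors
# P_A = AA^+, E_A = I_m - AA^+, F_A = I_n - A^+A, on a 5x3 matrix of rank 2.
rng = np.random.default_rng(0)
A = rng.standard_normal((5, 2)) @ rng.standard_normal((2, 3))  # rank-deficient
Ap = np.linalg.pinv(A)

P_A = A @ Ap                    # orthogonal projector onto R(A)
E_A = np.eye(5) - A @ Ap        # projector onto the orthogonal complement of R(A)
F_A = np.eye(3) - Ap @ A        # projector onto the null space of A

assert np.allclose(P_A @ P_A, P_A) and np.allclose(P_A, P_A.T)  # idempotent, symmetric
assert np.allclose(E_A @ A, 0)  # E_A annihilates the columns of A
assert np.allclose(A @ F_A, 0)  # F_A annihilates the rows of A
```

The same three projectors reappear throughout the rank formulas of Section 2.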
In this paper, we consider the multivariate general linear model
M: Y = XΘ + Ψ, E(vec(Ψ)) = 0, Cov(vec(Ψ)) = Cov{vec(Ψ), vec(Ψ)} = σ²Σ₂ ⊗ Σ₁, (1)
where Y ∈ ℝ^{n×m} is an observable random matrix (a longitudinal data set), X = (x_{ij}) ∈ ℝ^{n×p} is a known model matrix of arbitrary rank (0 ≤ r(X) ≤ min{n, p}), Θ = (θ_{ij}) ∈ ℝ^{p×m} is a matrix of fixed but unknown parameters, E(vec(Ψ)) and Cov(vec(Ψ)) denote the expectation vector and the dispersion matrix of the random error matrix Ψ ∈ ℝ^{n×m}, Σ₁ = (σ_{1ij}) ∈ ℝ^{n×n} and Σ₂ = (σ_{2ij}) ∈ ℝ^{m×m} are two known positive semi-definite matrices of arbitrary ranks, and σ² is an arbitrary positive scaling factor. The multivariate general linear model (for short, MGLM) in (1) is a rather direct extension of the widely used univariate general linear model.
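A hypothetical simulation makes the covariance assumption in (1) concrete: drawing vec(Ψ) with covariance σ²Σ₂ ⊗ Σ₁ and un-vectorizing gives one realization of the model. All sizes and covariance factors below are illustrative choices, not from the paper:

```python
import numpy as np

# Simulate one draw from model (1): vec(Psi) ~ (0, sigma^2 * kron(Sigma2, Sigma1)).
rng = np.random.default_rng(1)
n, m, sigma2 = 4, 3, 0.5
A1 = rng.standard_normal((n, n)); Sigma1 = A1 @ A1.T   # positive definite n x n
A2 = rng.standard_normal((m, m)); Sigma2 = A2 @ A2.T   # positive definite m x m

C = np.linalg.cholesky(sigma2 * np.kron(Sigma2, Sigma1))
v = C @ rng.standard_normal(n * m)        # vec(Psi): columns stacked
Psi = v.reshape((n, m), order="F")        # un-vec via column-major reshape

X = rng.standard_normal((n, 2))
Theta = rng.standard_normal((2, m))
Y = X @ Theta + Psi                       # one realization of the observable Y
assert Y.shape == (n, m)
```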
The assumption in (1) is typical in estimation and statistical inference under a multivariate linear regression framework. In statistical practice, we may meet the situation where a true regression model is misspecified in some other form for unforeseeable reasons, and we therefore face the task of comparing estimation and inference results and establishing certain links between them in order to reasonably explain and utilize the misspecified regression model. One situation in which model misspecification arises is the addition or deletion of regressors in the model. For example, if we take (1) as the true model and mistakenly add a new regressor part ZΓ to (1), we obtain an over-parameterized (over-fitted) form of M as
N: Y = XΘ + ZΓ + Ψ, (2)
where Z ∈ ℝ^{n×q} is a known matrix of arbitrary rank, and Γ ∈ ℝ^{q×m} is a matrix of fixed but unknown parameters. Given (1) and (2), we proposed and studied in [4] some research problems on the equivalence of inference results obtained from the two competing MGLMs.
As we know, a commonly used technique for handling a partitioned model is to multiply the model equation by a certain annihilating matrix and thereby transform it into a reduced model form. As a new exploration of the equivalence problem, we introduce this technique into the study of (1) and (2). To do so, we pre-multiply both sides of the model equation (2) by Z⊥ and note that Z⊥Z = 0 to obtain the following reduced model:
P: Z⊥Y = Z⊥XΘ + Z⊥Ψ. (3)
It should be pointed out that the estimation and inference results that we derive from the three models in (1)–(3) are not necessarily identical. Thus, it is a primary requirement to describe the links between the models and to propose and characterize possible equalities among estimation and inference results under the three MGLMs.
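The reduction from (2) to (3) can be sketched numerically: with Z⊥ = I − ZZ⁺, the nuisance term ZΓ vanishes exactly. Sizes below are illustrative assumptions:

```python
import numpy as np

# Sketch of the reduced model (3): premultiplying (2) by the annihilator
# Z^perp = I - ZZ^+ removes the Z Gamma part, since Z^perp Z = 0.
rng = np.random.default_rng(2)
n, p, q, m = 6, 2, 2, 3
X = rng.standard_normal((n, p))
Z = rng.standard_normal((n, q))
Zperp = np.eye(n) - Z @ np.linalg.pinv(Z)

assert np.allclose(Zperp @ Z, 0)          # the Z Gamma term is annihilated

Theta = rng.standard_normal((p, m)); Gamma = rng.standard_normal((q, m))
Psi = rng.standard_normal((n, m))
Y = X @ Theta + Z @ Gamma + Psi           # data generated from model (2)
# The reduced model (3) holds exactly: Z^perp Y = Z^perp X Theta + Z^perp Psi.
assert np.allclose(Zperp @ Y, Zperp @ X @ Theta + Zperp @ Psi)
```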
Before approaching the comparison problems of estimation and inference results under the three models in (1)–(3), we mention a well-known and effective method that has been widely used in the investigation of multivariate general linear models. Recall that Kronecker products and vec operations of matrices are popular tools for handling matrix operations in relation to multivariate general linear models. Using these operations, we can alternatively represent the three models in (1)–(3) as the following three standard linear statistical models:
M̂: vec(Y) = (I_m ⊗ X)vec(Θ) + vec(Ψ), (4)
N̂: vec(Y) = (I_m ⊗ X)vec(Θ) + (I_m ⊗ Z)vec(Γ) + vec(Ψ), (5)
P̂: (I_m ⊗ Z⊥)vec(Y) = (I_m ⊗ Z⊥X)vec(Θ) + (I_m ⊗ Z⊥)vec(Ψ). (6)
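The passage from (1)–(3) to (4)–(6) rests on the vec identity vec(AZB) = (B′ ⊗ A)vec(Z) recalled in the Introduction. A quick numerical check (arbitrary sizes, for illustration only):

```python
import numpy as np

# Verify vec(A Z B) = (B' kron A) vec(Z), and the special case behind (4):
# vec(X Theta) = (I_m kron X) vec(Theta).
rng = np.random.default_rng(3)
A = rng.standard_normal((4, 3)); Z = rng.standard_normal((3, 5)); B = rng.standard_normal((5, 2))
vec = lambda M: M.reshape(-1, order="F")   # stack columns

assert np.allclose(vec(A @ Z @ B), np.kron(B.T, A) @ vec(Z))

X = rng.standard_normal((6, 3)); Theta = rng.standard_normal((3, 2)); m = 2
assert np.allclose(vec(X @ Theta), np.kron(np.eye(m), X) @ vec(Theta))
```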
As a common fact in statistical analysis, we know that the first step in the inference of (1) is to estimate/predict certain functions of the unknown parameter matrices Θ and Ψ . Based on this consideration, it is of great interest to identify their estimators and predictors simultaneously. For this purpose, we construct a general parametric matrix that involves both Θ and Ψ as follows:
Φ = KΘ + JΨ, vec(Φ) = (I_m ⊗ K)vec(Θ) + (I_m ⊗ J)vec(Ψ), (7)
where K and J are k × p and k × n matrices, respectively. In this situation, we easily obtain that
E(Φ) = KΘ, Cov(vec(Φ)) = σ²(I_m ⊗ J)(Σ₂ ⊗ Σ₁)(I_m ⊗ J)′,
Cov{vec(Φ), vec(Y)} = Cov{(I_m ⊗ J)vec(Ψ), vec(Ψ)} = σ²(I_m ⊗ J)(Σ₂ ⊗ Σ₁) (8)
hold. Under the assumptions of the three models in (1)–(3), the corresponding predictors of Φ in (7) are not necessarily identical, even if the same optimality criterion is used to derive the predictors of Φ under the three competing models; this fact leads us to propose and study a series of research problems regarding the comparison and equivalence of inference results obtained from the three models. In order to obtain general results and facts under (1)–(8), we do not require probability distributions of the random variables in the MGLMs, although they are necessary for further discussion of identification and test problems.
The purpose of this paper is to consider some concrete problems on the comparison of the best linear unbiased estimators derived from (1) with those derived from (2) and (3). Historically, there have been previous investigations on establishing possible equalities of estimators of unknown parameter matrices in two competing linear models; see, e.g., [5,6], while equalities of estimators of unknown parameter vectors under linear models with new regressors (augmentation by nuisance parameters) were approached in [7,8,9,10,11,12,13,14,15,16,17]. In particular, the present two authors studied in [4] the equivalences of estimation and inference results under (1), (2), (4), and (5). As an update to this subject, we introduce the two reduced models in (3) and (6) and carry out a new analysis of the equivalences of estimators under (1)–(6).
The remainder of this paper is organized as follows: In Section 2, we introduce some matrix analysis tools that can be used to characterize equalities involving algebraic operations of matrices and their generalized inverses. In Section 3, we present a standard procedure to describe the predictability and estimability of parametric matrices under the three models in (1)–(3), and then show how to establish analytical expressions for the best linear unbiased predictors and best linear unbiased estimators of parametric matrices under these models. In Section 4, we discuss a group of problems on the equivalences of the BLUEs under (1)–(3).

2. Some Preliminaries

In order to establish the proposed mathematical equalities for predictors/estimators under the three models in (1)–(3), we need a series of basic rank equalities collected in the following two lemmas:
Lemma 1
([18]). Let A ∈ ℝ^{m×n}, B ∈ ℝ^{m×k}, C ∈ ℝ^{l×n}, and D ∈ ℝ^{l×k}. Then,
r[A, B] = r(A) + r(E_A B) = r(B) + r(E_B A),
r[A; C] = r(A) + r(C F_A) = r(C) + r(A F_C),
r[A, B; C, 0] = r(B) + r(C) + r(E_B A F_C),
r[A A′, B; B′, 0] = r[A, B] + r(B),
r[A, B; C, D] = r(A) + r(D − C A⁺ B) if R(B) ⊆ R(A) and R(C′) ⊆ R(A′),
where [A, B; C, D] denotes the block matrix with first block row [A, B] and second block row [C, D], and [A; C] denotes the column-stacked matrix [A′, C′]′.
In particular, the following results hold:
(a)
r[A, B] = r(A) ⇔ R(B) ⊆ R(A) ⇔ AA⁺B = B ⇔ E_A B = 0.
(b)
r[A; C] = r(A) ⇔ R(C′) ⊆ R(A′) ⇔ CA⁺A = C ⇔ CF_A = 0.
(c)
r[A, B] = r(A) + r(B) ⇔ R(A) ∩ R(B) = {0} ⇔ R((E_A B)′) = R(B′) ⇔ R((E_B A)′) = R(A′).
(d)
r[A; C] = r(A) + r(C) ⇔ R(A′) ∩ R(C′) = {0} ⇔ R(CF_A) = R(C) ⇔ R(AF_C) = R(A).
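Lemma 1's equalities are easy to spot-check with random matrices; the following sketch (our illustration, not from the paper) verifies the first rank formula and the equivalence in (a):

```python
import numpy as np

# Spot-check r[A, B] = r(A) + r(E_A B), and r[A, B] = r(A) <=> E_A B = 0.
rng = np.random.default_rng(4)
A = rng.standard_normal((5, 3))
B = rng.standard_normal((5, 2))
E_A = np.eye(5) - A @ np.linalg.pinv(A)

r = np.linalg.matrix_rank
assert r(np.hstack([A, B])) == r(A) + r(E_A @ B)

B2 = A @ rng.standard_normal((3, 2))      # now R(B2) is contained in R(A)
assert r(np.hstack([A, B2])) == r(A) and np.allclose(E_A @ B2, 0)
```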
A special consequence of (14) is given below; we shall use it to simplify some complicated matrix expressions that involve generalized inverses in the sequel.
Lemma 2.
Assume that five matrices A₁, B₁, A₂, B₂, and A₃ of appropriate sizes satisfy the conditions R(A₁′) ⊆ R(B₁′), R(A₂) ⊆ R(B₁), R(A₂′) ⊆ R(B₂′), and R(A₃) ⊆ R(B₂). Then,
r(A₁B₁⁺A₂B₂⁺A₃) = r[0, B₂, A₃; B₁, A₂, 0; A₁, 0, 0] − r(B₁) − r(B₂).
Hence,
A₁B₁⁺A₂B₂⁺A₃ = 0 ⇔ r[0, B₂, A₃; B₁, A₂, 0; A₁, 0, 0] = r(B₁) + r(B₂).
Matrix rank formulas and their consequences, as displayed in Lemmas 1 and 2, are now well recognized as useful techniques for constructing and characterizing various simple or complicated algebraic matrix equalities. We refer the reader to [19] and the references therein for the matrix rank method in the investigation of various linear statistical models.
Lemma 3
([20]). The linear matrix equation AX = B is consistent if and only if r[A, B] = r(A), or equivalently, AA⁺B = B. In this case, the general solution of the equation can be written in the parametric form X = A⁺B + (I − A⁺A)U, where U is an arbitrary matrix.
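Lemma 3 translates directly into code. The sketch below (illustrative sizes; B is built so that consistency holds) checks AA⁺B = B and that every member of the parametric family solves the equation:

```python
import numpy as np

# Lemma 3: if A A^+ B = B, then every X = A^+ B + (I - A^+ A) U solves A X = B.
rng = np.random.default_rng(5)
A = rng.standard_normal((5, 2)) @ rng.standard_normal((2, 3))  # rank-deficient 5x3
X0 = rng.standard_normal((3, 2))
B = A @ X0                                  # guarantees consistency
Ap = np.linalg.pinv(A)

assert np.allclose(A @ Ap @ B, B)           # consistency criterion of Lemma 3
U = rng.standard_normal((3, 2))             # U is arbitrary
X = Ap @ B + (np.eye(3) - Ap @ A) @ U       # general solution
assert np.allclose(A @ X, B)
```

Because A is rank deficient, (I − A⁺A)U is nonzero, so the solution set is genuinely a family, not a single point.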
Finally, we present the following established result on a constrained quadratic matrix-valued function minimization problem.
Lemma 4
([21,22]). Let
f(L) = (LC + D)M(LC + D)′ s.t. LA = B,
where A ∈ ℝ^{p×q}, B ∈ ℝ^{n×q}, C ∈ ℝ^{p×m}, and D ∈ ℝ^{n×m} are given, M ∈ ℝ^{m×m} is positive semi-definite, and the matrix equation LA = B is consistent. Then, there always exists a solution L₀ of L₀A = B such that
f(L) ⪰ f(L₀)
holds for all solutions of LA = B. In this case, the matrix L₀ satisfying (18) is determined by the following consistent matrix equation:
L₀[A, CMC′A⊥] = [B, −DMC′A⊥],
while the general expression of L₀ and the corresponding f(L₀) are given by
L₀ = argmin_{LA = B} f(L) = [B, −DMC′A⊥][A, CMC′A⊥]⁺ + V[A, CMC′]⊥,
f(L₀) = min_{LA = B} f(L) = KMK′ − KMC′A⊥(A⊥CMC′A⊥)⁺A⊥CMK′,
f(L) − f(L₀) = (LCMC′A⊥ + DMC′A⊥)(A⊥CMC′A⊥)⁺(LCMC′A⊥ + DMC′A⊥)′,
where K = BA⁺C + D, and V ∈ ℝ^{n×p} is arbitrary.

3. The Precise Theory of Predictability, Estimability, and BLUP/BLUE

In this section, we present a standard procedure for establishing predictability, estimability, and BLUP theory under an MGLM for the purpose of solving the comparison problems proposed in Section 1. Most of the material given below consists of routine illustrations of known concepts, definitions, and fundamental results on MGLMs; see, e.g., [4].
Definition 1.
Let Φ be as given in (7). Then,
(a)
Φ is said to be predictable under (1) if there exists a k × n matrix L such that E(LY − Φ) = 0;
(b)
Φ is said to be predictable under (4) if there exists an mk × mn matrix L such that E(L vec(Y) − vec(Φ)) = 0.
Definition 2.
Let Φ be as given in (7). Then,
(a)
Given that Φ is predictable under (1), if there exists a matrix L such that
Cov(LY − Φ) = min s.t. E(LY − Φ) = 0
holds in the Löwner partial ordering, the linear statistic L Y is defined to be the best linear unbiased predictor (for short, BLUP) of Φ under (1), and is denoted by
L Y = BLUP M ( Φ ) = BLUP M ( K Θ + J Ψ ) .
If J = 0 or K = 0 in (7), the L Y satisfying (23) is called the best linear unbiased estimator (for short, BLUE) of K Θ and the BLUP of J Ψ under (1), respectively, and is denoted by
L Y = BLUE M ( K Θ ) , L Y = BLUP M ( J Ψ ) ,
respectively.
(b)
Given that Φ is predictable under (4), if there exists a matrix L such that
Cov(L vec(Y) − vec(Φ)) = min s.t. E(L vec(Y) − vec(Φ)) = 0
holds in the Löwner partial ordering, the linear statistic L vec(Y) is defined to be the BLUP of vec(Φ) under (4), and is denoted by
L vec(Y) = BLUP_{M̂}(vec(Φ)) = BLUP_{M̂}((I_m ⊗ K)vec(Θ) + (I_m ⊗ J)vec(Ψ)).
If J = 0 or K = 0 in (7), the L vec(Y) satisfying (24) is called the BLUE of (I_m ⊗ K)vec(Θ) and the BLUP of (I_m ⊗ J)vec(Ψ) under (4), respectively, and is denoted by
L vec(Y) = BLUE_{M̂}((I_m ⊗ K)vec(Θ)), L vec(Y) = BLUP_{M̂}((I_m ⊗ J)vec(Ψ)),
respectively.
Recall that the unbiasedness of given predictors/estimators and the minimal covariance matrices formulated in (23) and (24) are intrinsic requirements in the statistical analysis of parametric regression models; they can be regarded as special cases of mathematical optimization problems on constrained quadratic matrix-valued functions in the Löwner partial ordering. Note from (1) and (7) that LY − Φ and vec(LY − Φ) can be rewritten as
LY − Φ = LXΘ + LΨ − KΘ − JΨ = (LX − K)Θ + (L − J)Ψ,
vec(LY − Φ) = (I_m ⊗ (LX − K))vec(Θ) + (I_m ⊗ (L − J))vec(Ψ).
Hence, the expectations of LY − Φ and vec(LY − Φ) can be expressed as
E(LY − Φ) = (LX − K)Θ, E(vec(LY − Φ)) = (I_m ⊗ (LX − K))vec(Θ).
The dispersion matrix of vec(LY − Φ) can be expressed as
Cov(vec(LY − Φ)) = (I_m ⊗ (L − J))Cov(vec(Ψ))(I_m ⊗ (L − J))′ = σ²(I_m ⊗ (L − J))(Σ₂ ⊗ Σ₁)(I_m ⊗ (L − J))′ = σ²Σ₂ ⊗ (L − J)Σ₁(L − J)′ = σ²Σ₂ ⊗ f(L),
where f(L) = (L − J)Σ₁(L − J)′.
Concerning the predictability of Φ in (7), we have the following known result.
Lemma 5
([4]). Let Φ be as given in (7). Then, the following three statements are equivalent:
(a)
Φ is predictable by Y in (1).
(b)
R(I_m ⊗ K′) ⊆ R(I_m ⊗ X′).
(c)
R(K′) ⊆ R(X′).
Theorem 1.
Assume Φ in (7) is predictable. Then,
Cov(vec(LY − Φ)) = min s.t. E(LY − Φ) = 0 ⇔ L[X, Σ₁X⊥] = [K, JΣ₁X⊥].
The matrix equation in (29), called the BLUP equation associated with Φ , is consistent as well, i.e.,
[K, JΣ₁X⊥][X, Σ₁X⊥]⁺[X, Σ₁X⊥] = [K, JΣ₁X⊥]
holds under Lemma 5(c), while the general expressions of L and the corresponding BLUP M ( Φ ) can be written as
BLUP_M(Φ) = LY = ([K, JΣ₁X⊥][X, Σ₁X⊥]⁺ + U[X, Σ₁X⊥]⊥)Y,
where U ∈ ℝ^{k×n} is arbitrary. In particular,
BLUE_M(KΘ) = ([K, 0][X, Σ₁X⊥]⁺ + U[X, Σ₁X⊥]⊥)Y,
BLUP_M(JΨ) = ([0, JΣ₁X⊥][X, Σ₁X⊥]⁺ + U[X, Σ₁X⊥]⊥)Y,
where U ∈ ℝ^{k×n} is arbitrary. Furthermore, the following results hold.
(a)
r[X, Σ₁X⊥] = r[X, Σ₁], R[X, Σ₁X⊥] = R[X, Σ₁], and R(X) ∩ R(Σ₁X⊥) = {0}.
(b)
L is unique if and only if r [ X , Σ 1 ] = n .
(c)
BLUP_M(Φ) is unique if and only if R(Y) ⊆ R[X, Σ₁] holds with probability 1.
(d)
The expectation and the dispersion matrices of BLUP_M(Φ) and Φ − BLUP_M(Φ), as well as the covariance matrix between BLUP_M(Φ) and Φ, are unique and are given by
E(BLUP_M(Φ) − Φ) = 0,
Cov(vec(BLUP_M(Φ))) = σ²Σ₂ ⊗ ([K, JΣ₁X⊥][X, Σ₁X⊥]⁺)Σ₁([K, JΣ₁X⊥][X, Σ₁X⊥]⁺)′,
Cov{vec(BLUP_M(Φ)), vec(Φ)} = σ²Σ₂ ⊗ [K, JΣ₁X⊥][X, Σ₁X⊥]⁺Σ₁J′,
Cov(vec(Φ)) − Cov(vec(BLUP_M(Φ))) = σ²Σ₂ ⊗ JΣ₁J′ − σ²Σ₂ ⊗ ([K, JΣ₁X⊥][X, Σ₁X⊥]⁺)Σ₁([K, JΣ₁X⊥][X, Σ₁X⊥]⁺)′,
Cov(vec(Φ − BLUP_M(Φ))) = σ²Σ₂ ⊗ ([K, JΣ₁X⊥][X, Σ₁X⊥]⁺ − J)Σ₁([K, JΣ₁X⊥][X, Σ₁X⊥]⁺ − J)′.
(e)
BLUP M ( Φ ) , BLUE M ( K Θ ) , and BLUP M ( J Ψ ) satisfy
BLUP_M(Φ) = BLUE_M(KΘ) + BLUP_M(JΨ),
Cov{vec(BLUE_M(KΘ)), vec(BLUP_M(JΨ))} = 0,
Cov(vec(BLUP_M(Φ))) = Cov(vec(BLUE_M(KΘ))) + Cov(vec(BLUP_M(JΨ))).
(f)
BLUP_M(TΦ) = T BLUP_M(Φ) holds for any matrix T ∈ ℝ^{t×k}.
Proof. 
We obtain from (28) that the constrained minimization problem in (23) is equivalent to
Σ₂ ⊗ f(L) ⪰ Σ₂ ⊗ f(L₀) for all solutions of I_m ⊗ LX = I_m ⊗ K,
which further reduces to
f(L) ⪰ f(L₀) for all solutions of LX = K.
Since Σ₂ is a non-null positive semi-definite matrix, we apply Lemma 4 to (43) to obtain the matrix equation
L₀[X, Σ₁X⊥] = [K, JΣ₁X⊥],
as required for (29). Equations (32) and (33) follow directly from (31). Result (a) is a well-known fact about the matrix [X, Σ₁X⊥]; see, e.g., [1,2].
Note that
[X, Σ₁X⊥]⊥ = 0 ⇔ [X, Σ₁X⊥][X, Σ₁X⊥]⁺ = I_n ⇔ r[X, Σ₁X⊥] = r[X, Σ₁] = n
by (10). Combining this fact with (31) leads to (b). Setting the term U[X, Σ₁X⊥]⊥Y = 0 in (31) leads to (c).
From (1) and (31),
Cov(vec(BLUP_M(Φ))) = (I_m ⊗ L)Cov(vec(Ψ))(I_m ⊗ L)′ = σ²(I_m ⊗ L)(Σ₂ ⊗ Σ₁)(I_m ⊗ L)′ = σ²Σ₂ ⊗ LΣ₁L′ = σ²Σ₂ ⊗ ([K, JΣ₁X⊥][X, Σ₁X⊥]⁺ + U[X, Σ₁X⊥]⊥)Σ₁([K, JΣ₁X⊥][X, Σ₁X⊥]⁺ + U[X, Σ₁X⊥]⊥)′ = σ²Σ₂ ⊗ ([K, JΣ₁X⊥][X, Σ₁X⊥]⁺)Σ₁([K, JΣ₁X⊥][X, Σ₁X⊥]⁺)′,
thus establishing (35).
From (1) and (31),
Cov{vec(BLUP_M(Φ)), vec(Φ)} = (I_m ⊗ L)Cov(vec(Ψ))(I_m ⊗ J)′ = σ²(I_m ⊗ L)(Σ₂ ⊗ Σ₁)(I_m ⊗ J)′ = σ²Σ₂ ⊗ LΣ₁J′ = σ²Σ₂ ⊗ [K, JΣ₁X⊥][X, Σ₁X⊥]⁺Σ₁J′,
establishing (36). Combining (8) and (35) yields (37). Substituting (31) into (28) and simplifying, we obtain
Cov(vec(Φ − BLUP_M(Φ))) = σ²Σ₂ ⊗ ([K, JΣ₁X⊥][X, Σ₁X⊥]⁺ + U[X, Σ₁X⊥]⊥ − J)Σ₁([K, JΣ₁X⊥][X, Σ₁X⊥]⁺ + U[X, Σ₁X⊥]⊥ − J)′ = σ²Σ₂ ⊗ ([K, JΣ₁X⊥][X, Σ₁X⊥]⁺ − J)Σ₁([K, JΣ₁X⊥][X, Σ₁X⊥]⁺ − J)′.
Rewrite the arbitrary matrix U in (31) as U = U₁ + U₂, and [K, JΣ₁X⊥] in (31) as [K, JΣ₁X⊥] = [K, 0] + [0, JΣ₁X⊥]. Then, (31) can equivalently be represented as
BLUP_M(Φ) = ([K, JΣ₁X⊥][X, Σ₁X⊥]⁺ + U[X, Σ₁X⊥]⊥)Y = ([K, 0][X, Σ₁X⊥]⁺ + U₁[X, Σ₁X⊥]⊥)Y + ([0, JΣ₁X⊥][X, Σ₁X⊥]⁺ + U₂[X, Σ₁X⊥]⊥)Y = BLUE_M(KΘ) + BLUP_M(JΨ),
thus establishing (39).
From (32) and (33), the covariance matrix between vec(BLUE_M(KΘ)) and vec(BLUP_M(JΨ)) is
Cov{vec(BLUE_M(KΘ)), vec(BLUP_M(JΨ))} = σ²Σ₂ ⊗ ([K, 0][X, Σ₁X⊥]⁺ + U₁[X, Σ₁X⊥]⊥)Σ₁([0, JΣ₁X⊥][X, Σ₁X⊥]⁺ + U₂[X, Σ₁X⊥]⊥)′ = σ²Σ₂ ⊗ ([K, 0][X, Σ₁X⊥]⁺)Σ₁([0, JΣ₁X⊥][X, Σ₁X⊥]⁺)′.
Applying (15) to the matrix product on the right-hand side of (44) and simplifying, we obtain
r([K, 0][X, Σ₁X⊥]⁺Σ₁([0, JΣ₁X⊥][X, Σ₁X⊥]⁺)′) = r[0, [X, Σ₁X⊥]′, [0, JΣ₁X⊥]′; [X, Σ₁X⊥], Σ₁, 0; [K, 0], 0, 0] − 2r[X, Σ₁X⊥] = r[X; K] + r[X′; Σ₁] + r[X, Σ₁X⊥, Σ₁J′] − r(X) − 2r[X, Σ₁] (by (10) and (13)) = r(X) + r[X, Σ₁] + r[X, Σ₁] − r(X) − 2r[X, Σ₁] (by (a)) = 0,
thus, the right-hand side of (44) is null, establishing (40). Equation (41) follows from (39) and (40). □
Concerning the BLUEs of K Θ , the mean matrix X Θ , and the BLUP of the error matrix Ψ in (1), we have the following results.
Corollary 1.
Let M be as given in (1). Then, the following facts hold.
(i)
KΘ is estimable under (1) ⇔ R(K′) ⊆ R(X′).
(ii)
The mean matrix X Θ is always estimable under (1).
In this case, the matrix equation
L[X, Σ₁X⊥] = [K, 0]
is consistent, and the following results hold.
(a)
The general expression of BLUE M ( K Θ ) can be written as
BLUE_M(KΘ) = ([K, 0][X, Σ₁X⊥]⁺ + U[X, Σ₁X⊥]⊥)Y
with
E ( BLUE M ( K Θ ) ) = K Θ ,
Cov(vec(BLUE_M(KΘ))) = σ²Σ₂ ⊗ ([K, 0][X, Σ₁X⊥]⁺)Σ₁([K, 0][X, Σ₁X⊥]⁺)′,
where U ∈ ℝ^{k×n} is arbitrary.
(b)
The general expression of BLUE M ( X Θ ) can be written as
BLUE_M(XΘ) = ([X, 0][X, Σ₁X⊥]⁺ + U[X, Σ₁X⊥]⊥)Y
with
E ( BLUE M ( X Θ ) ) = X Θ ,
Cov(vec(BLUE_M(XΘ))) = σ²Σ₂ ⊗ ([X, 0][X, Σ₁X⊥]⁺)Σ₁([X, 0][X, Σ₁X⊥]⁺)′,
where U ∈ ℝ^{n×n} is arbitrary.
(c)
The general expression of BLUP M ( Ψ ) can be written as
BLUP_M(Ψ) = ([0, Σ₁X⊥][X, Σ₁X⊥]⁺ + U[X, Σ₁X⊥]⊥)Y = (Σ₁X⊥(X⊥Σ₁X⊥)⁺X⊥ + U[X, Σ₁X⊥]⊥)Y
with
Cov{vec(BLUP_M(Ψ)), vec(Ψ)} = Cov(vec(BLUP_M(Ψ))) = σ²Σ₂ ⊗ Σ₁X⊥(X⊥Σ₁X⊥)⁺X⊥Σ₁,
Cov(vec(Ψ − BLUP_M(Ψ))) = Cov(vec(Ψ)) − Cov(vec(BLUP_M(Ψ))) = σ²Σ₂ ⊗ Σ₁ − σ²Σ₂ ⊗ Σ₁X⊥(X⊥Σ₁X⊥)⁺X⊥Σ₁,
where U ∈ ℝ^{n×n} is arbitrary.
(d)
Y , BLUE M ( X Θ ) , and BLUP M ( Ψ ) satisfy
Y = BLUE_M(XΘ) + BLUP_M(Ψ),
Cov{vec(BLUE_M(XΘ)), vec(BLUP_M(Ψ))} = 0,
Cov(vec(Y)) = Cov(vec(BLUE_M(XΘ))) + Cov(vec(BLUP_M(Ψ))).
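The decomposition in Corollary 1(d) can be sketched numerically. In the sketch below (our illustration, with Σ₁ taken positive definite so that the BLUE coefficient matrix is unique and the decomposition is exact), the BLUE of XΘ also coincides with the familiar GLS fit X(X′Σ₁⁻¹X)⁻¹X′Σ₁⁻¹Y:

```python
import numpy as np

# Corollary 1(d) with Sigma1 positive definite: Y = BLUE_M(X Theta) + BLUP_M(Psi).
rng = np.random.default_rng(6)
n, p, m = 6, 2, 3
X = rng.standard_normal((n, p))
S = rng.standard_normal((n, n)); Sigma1 = S @ S.T + n * np.eye(n)  # positive definite
Y = rng.standard_normal((n, m))

Xperp = np.eye(n) - X @ np.linalg.pinv(X)
G = np.linalg.pinv(np.hstack([X, Sigma1 @ Xperp]))
blue = np.hstack([X, np.zeros((n, n))]) @ G @ Y               # BLUE_M(X Theta)
blup = np.hstack([np.zeros((n, p)), Sigma1 @ Xperp]) @ G @ Y  # BLUP_M(Psi)

assert np.allclose(blue + blup, Y)        # the exact decomposition of Y
# The BLUE agrees with the GLS projector X (X' S^-1 X)^-1 X' S^-1 here.
P_gls = X @ np.linalg.solve(X.T @ np.linalg.solve(Sigma1, X), X.T) @ np.linalg.inv(Sigma1)
assert np.allclose(blue, P_gls @ Y)
```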
The BLUEs under the over-parameterized model (2) can be formulated from the standard results on the BLUEs under the true model as follows.
Theorem 2.
Let N be as given in (2), and denote W = [ X , Z ] , K ^ = [ K , 0 ] , and X ^ = [ X , 0 ] . Then, the following facts hold.
(a)
KΘ is estimable under (2) ⇔ R(K̂′) ⊆ R(W′) ⇔ R(K′) ⊆ R(X′Z⊥).
(b)
XΘ is estimable under (2) ⇔ R(X) ∩ R(Z) = {0}.
In this case, the matrix equation
L[W, Σ₁W⊥] = [K̂, 0]
is consistent, and a BLUE of K Θ under (2) is
BLUE_N(KΘ) = LY = ([K̂, 0][W, Σ₁W⊥]⁺ + U[W, Σ₁W⊥]⊥)Y,
vec(BLUE_N(KΘ)) = (I_m ⊗ L)vec(Y) = (I_m ⊗ ([K̂, 0][W, Σ₁W⊥]⁺ + U[W, Σ₁W⊥]⊥))vec(Y),
Cov(vec(BLUE_N(KΘ))) = σ²Σ₂ ⊗ ([K̂, 0][W, Σ₁W⊥]⁺)Σ₁([K̂, 0][W, Σ₁W⊥]⁺)′,
where U ∈ ℝ^{k×n} is arbitrary. In particular,
BLUE_N(XΘ) = ([X̂, 0][W, Σ₁W⊥]⁺ + U[W, Σ₁W⊥]⊥)Y,
vec(BLUE_N(XΘ)) = (I_m ⊗ L)vec(Y) = (I_m ⊗ ([X̂, 0][W, Σ₁W⊥]⁺ + U[W, Σ₁W⊥]⊥))vec(Y),
Cov(vec(BLUE_N(XΘ))) = σ²Σ₂ ⊗ ([X̂, 0][W, Σ₁W⊥]⁺)Σ₁([X̂, 0][W, Σ₁W⊥]⁺)′,
where U ∈ ℝ^{n×n} is arbitrary.
Proof. 
Note that KΘ can be rewritten as KΘ = [K, 0][Θ′, Γ′]′ = K̂[Θ′, Γ′]′ under (2). Hence, KΘ is estimable under (2) if and only if R(K̂′) ⊆ R(W′) by Lemma 5, or equivalently,
r[X, Z; K, 0] = r[X, Z].
In addition, note that r[X, Z; K, 0] = r[Z⊥X; K] + r(Z) and r[X, Z] = r(Z⊥X) + r(Z) by (10). Hence, (65) is further equivalent to R(K′) ⊆ R((Z⊥X)′) = R(X′Z⊥), as required for (a). Let K = X in (65). Then, we obtain from (65) that
r[X, Z; X, 0] = r[0, Z; X, 0] = r(X) + r(Z).
Hence, (66) is equivalent to R(X) ∩ R(Z) = {0}, as required for (b). Equations (58)–(64) follow from the standard results on BLUEs in Corollary 1. □
The BLUEs of unknown parameter matrices under the transformed model in (3) can be formulated from the standard results on the BLUEs under the true model as follows.
Theorem 3.
Let P be as given in (3). Then, the following results hold:
(a)
KΘ is estimable under (3) ⇔ R(K′) ⊆ R(X′Z⊥).
(b)
XΘ is estimable under (3) ⇔ R(X) ∩ R(Z) = {0}.
In this case, the matrix equation
L[Z⊥X, Z⊥Σ₁Z⊥(Z⊥X)⊥] = [K, 0]
is consistent, and
BLUE_P(KΘ) = LZ⊥Y = ([K, 0][Z⊥X, Z⊥Σ₁Z⊥(Z⊥X)⊥]⁺ + U[Z⊥X, Z⊥Σ₁Z⊥(Z⊥X)⊥]⊥)Z⊥Y,
vec(BLUE_P(KΘ)) = (I_m ⊗ ([K, 0][Z⊥X, Z⊥Σ₁Z⊥(Z⊥X)⊥]⁺Z⊥ + U[Z⊥X, Z⊥Σ₁Z⊥(Z⊥X)⊥]⊥Z⊥))vec(Y),
Cov(vec(BLUE_P(KΘ))) = σ²Σ₂ ⊗ ([K, 0][Z⊥X, Z⊥Σ₁Z⊥(Z⊥X)⊥]⁺)Z⊥Σ₁Z⊥([K, 0][Z⊥X, Z⊥Σ₁Z⊥(Z⊥X)⊥]⁺)′,
where U ∈ ℝ^{k×n} is arbitrary.

4. The Equivalence Analysis of BLUEs under True and Misspecified Models

Concerning equalities between two linear statistics G₁Y and G₂Y, the following three possible situations should be distinguished.
Definition 3.
Let Y be a random matrix.
(a)
The equality G 1 Y = G 2 Y is said to hold definitely iff G 1 = G 2 .
(b)
The equality G₁Y = G₂Y is said to hold with probability 1 iff both E(G₁Y − G₂Y) = 0 and Cov(vec(G₁Y − G₂Y)) = 0 hold.
(c)
G 1 Y and G 2 Y are said to have the same expectation and covariance matrix iff both E ( G 1 Y ) = E ( G 2 Y ) and Cov ( G 1 Y ) = Cov ( G 2 Y ) hold.
Assume that KΘ is estimable under (2). Then, it can be seen from Corollary 1(i), Theorem 2(a), and Theorem 3(a) that KΘ is estimable under (1) and (3) as well. In this case, the BLUE of KΘ under (1) can be written as (46), while the BLUEs of KΘ under the misspecified models can be written as (59) and (68), respectively. The three BLUEs are not necessarily the same, and thus it is natural to consider the relations among them in the senses of Definition 3.
Theorem 4.
Assume that K Θ is estimable under (2) for K R k × p , and let BLUE M ( K Θ ) , BLUE N ( K Θ ) , and BLUE P ( K Θ ) be as given in (46), (59), and (68), respectively. In addition, let
M = [Σ₁, X, Z; X′, 0, 0; 0, K, 0], N = [Σ₁, X, Z; X′, 0, 0].
Then, the following eight statements are equivalent:
(a)
BLUE M ( K Θ ) = BLUE N ( K Θ ) holds definitely;
(b)
BLUE M ( K Θ ) = BLUE P ( K Θ ) holds definitely;
(c)
BLUE M ( K Θ ) = BLUE N ( K Θ ) holds with probability 1 ;
(d)
BLUE M ( K Θ ) = BLUE P ( K Θ ) holds with probability 1 ;
(e)
Cov(vec(BLUE_M(KΘ) − BLUE_N(KΘ))) = Cov(vec(BLUE_M(KΘ) − BLUE_P(KΘ))) = 0;
(f)
Cov ( BLUE M ( K Θ ) ) = Cov ( BLUE N ( K Θ ) ) ;
(g)
Cov ( BLUE M ( K Θ ) ) = Cov ( BLUE P ( K Θ ) ) .
(h)
r ( M ) = r ( N ) .
Proof. 
Combining (45) and (58), we obtain a new equation for L:
L[X, Σ₁X⊥, W, Σ₁W⊥] = [K, 0, K̂, 0].
This matrix equation has a solution for L if and only if
r[X, Σ₁X⊥, W, Σ₁W⊥; K, 0, K̂, 0] = r[X, Σ₁X⊥, W, Σ₁W⊥].
Simplifying both sides by R(X) ⊆ R(W) and elementary block matrix operations, we obtain
r[X, Σ₁X⊥, W, Σ₁W⊥; K, 0, K̂, 0] = r[Σ₁X⊥, X, Z; 0, K, 0] = r[Σ₁, X, Z; X′, 0, 0; 0, K, 0] − r(X) = r(M) − r(X)
by (11) and X⊥ = F_{X′}, and
r[X, Σ₁X⊥, W, Σ₁W⊥] = r[Σ₁X⊥, X, Z] = r[Σ₁, X, Z; X′, 0, 0] − r(X) = r(N) − r(X).
Hence, (a) is equivalent to the rank equality in (h).
By Definition 3(a), BLUE_M(KΘ) = BLUE_P(KΘ) holds definitely iff the coefficient matrix in (68) satisfies (45):
LZ⊥[X, Σ₁X⊥] = [K, 0] ⇔ L[Z⊥X, Z⊥Σ₁X⊥] = [K, 0] ⇔ ([K, 0][Z⊥X, Z⊥Σ₁Z⊥(Z⊥X)⊥]⁺ + U[Z⊥X, Z⊥Σ₁Z⊥(Z⊥X)⊥]⊥)[Z⊥X, Z⊥Σ₁X⊥] = [K, 0],
which further reduces to
[K, 0][Z⊥X, Z⊥Σ₁Z⊥(Z⊥X)⊥]⁺[Z⊥X, Z⊥Σ₁X⊥] = [K, 0].
Note that
R([Z⊥X, Z⊥Σ₁X⊥]) ⊆ R([Z⊥X, Z⊥Σ₁Z⊥(Z⊥X)⊥]),
R([K, 0]′) ⊆ R([Z⊥X, Z⊥Σ₁Z⊥(Z⊥X)⊥]′).
Then, (73) is simplified by (10), (11), (13), and (14) as
r([K, 0] − [K, 0][Z⊥X, Z⊥Σ₁Z⊥(Z⊥X)⊥]⁺[Z⊥X, Z⊥Σ₁X⊥]) = r[[Z⊥X, Z⊥Σ₁Z⊥(Z⊥X)⊥], [Z⊥X, Z⊥Σ₁X⊥]; [K, 0], [K, 0]] − r[Z⊥X, Z⊥Σ₁Z⊥(Z⊥X)⊥] = r[Z⊥X, Z⊥Σ₁; K, 0; 0, X′] − r(X) − r[Z⊥X, Z⊥Σ₁] = r[Σ₁, X, Z; X′, 0, 0; 0, K, 0] − r(X) − r[X, Z, Σ₁] = r(M) − r(N).
Setting (74) equal to zero, we see that (b) is equivalent to the rank equality in (h).
By Definition 3(b), we see that BLUE M ( K Θ ) = BLUE N ( K Θ ) holds with probability 1 if and only if
E(BLUE_M(KΘ) − BLUE_N(KΘ)) = 0 and Cov(vec(BLUE_M(KΘ) − BLUE_N(KΘ))) = 0;
BLUE M ( K Θ ) = BLUE P ( K Θ ) holds with probability 1 if and only if
E(BLUE_M(KΘ) − BLUE_P(KΘ)) = 0 and Cov(vec(BLUE_M(KΘ) − BLUE_P(KΘ))) = 0.
The first equalities in (75) and (76) hold naturally. Thus, (c), (d), and (e) are equivalent. Furthermore, the two equalities in (e) are equivalent to
(I_m ⊗ [K, 0][X, Σ₁X⊥]⁺ − I_m ⊗ [K̂, 0][W, Σ₁W⊥]⁺)(Σ₂ ⊗ Σ₁) = Σ₂ ⊗ ([K, 0][X, Σ₁X⊥]⁺Σ₁ − [K̂, 0][W, Σ₁W⊥]⁺Σ₁) = 0,
(I_m ⊗ [K, 0][X, Σ₁X⊥]⁺ − I_m ⊗ [K, 0][Z⊥X, Z⊥Σ₁Z⊥(Z⊥X)⊥]⁺Z⊥)(Σ₂ ⊗ Σ₁) = Σ₂ ⊗ ([K, 0][X, Σ₁X⊥]⁺Σ₁ − [K, 0][Z⊥X, Z⊥Σ₁Z⊥(Z⊥X)⊥]⁺Z⊥Σ₁) = 0.
Notice that Σ₂ ≠ 0. Equations (77) and (78) are equivalent to
[K, 0][X, Σ₁X⊥]⁺Σ₁ − [K̂, 0][W, Σ₁W⊥]⁺Σ₁ = 0,
[K, 0][X, Σ₁X⊥]⁺Σ₁ − [K, 0][Z⊥X, Z⊥Σ₁Z⊥(Z⊥X)⊥]⁺Z⊥Σ₁ = 0.
Applying (14) to the difference on the left-hand side of (79), and simplifying by elementary matrix operations, (11), and (13), we obtain
r([K, 0][X, Σ₁X⊥]⁺Σ₁ − [K̂, 0][W, Σ₁W⊥]⁺Σ₁) = r[[X, Σ₁X⊥], 0, Σ₁; 0, [W, Σ₁W⊥], Σ₁; [K, 0], −[K̂, 0], 0] − r[X, Σ₁X⊥] − r[W, Σ₁W⊥] = r[Σ₁X⊥, X, Z; 0, K, 0] − r[W, Σ₁] = r[Σ₁, X, Z; X′, 0, 0; 0, K, 0] − r(X) − r[W, Σ₁] = r(M) − r(N)
by (13). Similarly, we can show that
r ( [ K , 0 ] [ X , Σ 1 X ] + Σ 1 [ K , 0 ] [ Z X , Z Σ 1 Z ( Z X ) ] + Z Σ 1 ) = r [ X , Σ 1 X ] 0 Σ 1 0 [ Z X , Z Σ 1 Z ( Z X ) ] Z Σ 1 [ K , 0 ] [ K , 0 ] 0 r [ X , Σ 1 X ] r [ Z X , Z Σ 1 Z ( Z X ) ] = r X 0 0 0 Σ 1 Z X Z Σ 1 X Z X Z Σ 1 Z ( Z X ) 0 K 0 K 0 0 r [ X , Σ 1 ] r [ Z X , Z Σ 1 ] = r X 0 0 0 Σ 1 0 Z Σ 1 X Z X Z Σ 1 Z ( Z X ) 0 0 0 K 0 0 r [ X , Σ 1 ] r [ Z X , Z Σ 1 ] = r Z Σ 1 X Z X Z Σ 1 Z ( Z X ) 0 K 0 r [ Z X , Z Σ 1 ] = r Z Σ 1 Z X Z Σ 1 Z 0 K 0 0 0 ( Z X ) X 0 0 r ( X ) r ( Z X ) r [ Z X , Z Σ 1 ] = r Z Σ 1 Z X 0 K X 0 r ( X ) r [ Z X , Z Σ 1 ] = r Σ 1 X Z X 0 0 0 K 0 r ( X ) r [ X , Z , Σ 1 ] = r ( M ) r ( N )
by (13). Setting the right-hand sides of (81) and (82) equal to zero, we obtain the equivalence of (e) and (h).
It follows from (48) and (61) that
Cov(vec(BLUE_M(KΘ))) − Cov(vec(BLUE_N(KΘ))) = σ²Σ₂ ⊗ [K, 0][X, Σ₁X⊥]⁺Σ₁([K, 0][X, Σ₁X⊥]⁺)′ − σ²Σ₂ ⊗ [K̂, 0][W, Σ₁W⊥]⁺Σ₁([K̂, 0][W, Σ₁W⊥]⁺)′.
Hence,
r(Cov(vec(BLUE_M(KΘ))) − Cov(vec(BLUE_N(KΘ)))) = r(Σ₂) r([K, 0][X, Σ₁X⊥]⁺Σ₁([K, 0][X, Σ₁X⊥]⁺)′ − [K̂, 0][W, Σ₁W⊥]⁺Σ₁([K̂, 0][W, Σ₁W⊥]⁺)′),
where a rank formula for the matrix difference in (84) is
r([K, 0][X, Σ₁X⊥]⁺Σ₁([K, 0][X, Σ₁X⊥]⁺)′ − [K̂, 0][W, Σ₁W⊥]⁺Σ₁([K̂, 0][W, Σ₁W⊥]⁺)′) = r(M) − r(N);
see [13]. Substituting (85) into (84) yields
r(Cov(vec(BLUE_M(KΘ))) − Cov(vec(BLUE_N(KΘ)))) = r(Σ₂)(r(M) − r(N)).
Similarly, we can obtain
r(Cov(vec(BLUE_M(KΘ))) − Cov(vec(BLUE_P(KΘ)))) = r(Σ₂)(r(M) − r(N)).
Setting the right-hand sides equal to zero, we obtain the equivalences of (f), (g), and (h). □
Finally, we give a special case of Theorem 4 with K = X as follows.
Corollary 2.
Assume that the mean vector $X\Theta$ is estimable under (2), i.e., $R(X) \cap R(Z) = \{0\}$ holds. Then, the following statements are equivalent:
(a)
$\mathrm{BLUE}_{\mathscr{M}}(X\Theta) = \mathrm{BLUE}_{\mathscr{N}}(X\Theta)$ holds definitely (with probability 1).
(b)
$\mathrm{BLUE}_{\mathscr{M}}(X\Theta) = \mathrm{BLUE}_{\mathscr{P}}(X\Theta)$ holds definitely (with probability 1).
(c)
$\operatorname{Cov}\big(\mathrm{BLUE}_{\mathscr{M}}(X\Theta) - \mathrm{BLUE}_{\mathscr{N}}(X\Theta)\big) = \operatorname{Cov}\big(\mathrm{BLUE}_{\mathscr{M}}(X\Theta) - \mathrm{BLUE}_{\mathscr{P}}(X\Theta)\big) = 0$.
(d)
$\operatorname{Cov}(\mathrm{BLUE}_{\mathscr{M}}(X\Theta)) = \operatorname{Cov}(\mathrm{BLUE}_{\mathscr{N}}(X\Theta))$.
(e)
$\operatorname{Cov}(\mathrm{BLUE}_{\mathscr{M}}(X\Theta)) = \operatorname{Cov}(\mathrm{BLUE}_{\mathscr{P}}(X\Theta))$.
(f)
$r\begin{bmatrix} \Sigma_1 & Z \\ X' & 0 \end{bmatrix} = r[\Sigma_1,\ X,\ Z]$.
Proof. 
Setting K = X in Theorem 4(h) and simplifying by (13), we obtain
$$
r(M) = r\begin{bmatrix} \Sigma_1 & X & Z \\ X' & 0 & 0 \\ 0 & X & 0 \end{bmatrix} = r\begin{bmatrix} \Sigma_1 & Z \\ X' & 0 \end{bmatrix} + r(X), \qquad r(N) = r\begin{bmatrix} \Sigma_1 & X & Z \\ X' & 0 & 0 \end{bmatrix} = r[\Sigma_1,\ X,\ Z] + r(X).
$$
Hence, the condition r(M) = r(N) in Theorem 4(h) reduces to the rank formula in (f). □
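Condition (f) admits a quick numerical illustration. When $\Sigma_1$ is nonsingular, the left-hand side equals $r(\Sigma_1) + r(X'\Sigma_1^{-1}Z)$ while the right-hand side equals $r(\Sigma_1)$, so (f) reduces to $X'\Sigma_1^{-1}Z = 0$. The sketch below (with the simplifying assumption $\Sigma_1 = I$ and toy matrices $X$, $Z$ of our own choosing) shows the condition holding when the added regressors are orthogonal to $X$ and failing otherwise:

```python
import numpy as np

rank = np.linalg.matrix_rank

# Condition (f) of Corollary 2: r([[Σ1, Z], [X', 0]]) = r([Σ1, X, Z]).
# With Σ1 = I_4 this reduces to X'Z = 0.
Sigma1 = np.eye(4)
X = np.array([[1.0], [0.0], [0.0], [0.0]])
Z_orth = np.array([[0.0], [1.0], [0.0], [0.0]])   # X'Z = 0
Z_corr = np.array([[1.0], [1.0], [0.0], [0.0]])   # X'Z ≠ 0

def condition_f(X, Z, S):
    """Check the rank equality in condition (f) of Corollary 2."""
    lhs = np.block([[S, Z], [X.T, np.zeros((X.shape[1], Z.shape[1]))]])
    rhs = np.hstack([S, X, Z])
    return rank(lhs) == rank(rhs)

print(condition_f(X, Z_orth, Sigma1))  # True: the BLUEs coincide
print(condition_f(X, Z_corr, Sigma1))  # False: misspecification changes the BLUE
```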

5. Conclusions

The comparison and equivalence analysis of statistical inference results under true and misspecified linear models is of both theoretical and applied interest, as illustrated in this article, and many mathematical methods and techniques are available for addressing problems of this kind under various statistical assumptions. As a concrete topic in this regard, we reconsidered in the preceding sections several equivalence problems under a true multivariate linear model and two of its misspecified forms. The key step of this study was to convert the equivalence problems under the three models into certain algebraic matrix equalities or equations, and then to derive the corresponding results and facts by means of effective matrix analysis tools, including the matrix equation method and the matrix rank method. Because the conclusions in the preceding sections are all presented through explicit expressions and equalities, we believe that the contributions of this article are easy to understand and can serve as a group of theoretical references for the statistical analysis of various subsequent problems regarding MGLMs. Moreover, since all the formulas and facts in the preceding theorems are given in analytical form, they can easily be reduced to specific conclusions when the model matrices and the covariance matrix in (1) take certain prescribed forms. For example, let
$$
\operatorname{Cov}(\Psi) = \sigma^{2}(I_m \otimes \Sigma_1), \qquad \operatorname{Cov}(\Psi) = \sigma^{2}(\Sigma_2 \otimes I_n), \qquad \operatorname{Cov}(\Psi) = \sigma^{2} I_{mn}
$$
in (1), respectively, which are regularly assumed in various concrete MGLMs.
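For concreteness, the three special covariance structures can be written out with `np.kron`; the dimensions and the matrices $\Sigma_1$, $\Sigma_2$ below are illustrative placeholders, not quantities from this paper:

```python
import numpy as np

# Three special covariance structures: σ²(I_m ⊗ Σ1), σ²(Σ2 ⊗ I_n), σ² I_mn.
# Shapes chosen only for illustration (m = 3, n = 2, σ² = 1).
m, n, sigma2 = 3, 2, 1.0
Sigma1 = np.array([[2.0, 0.5], [0.5, 1.0]])                     # n×n, illustrative
Sigma2 = np.array([[1.0, 0.3, 0.0], [0.3, 1.0, 0.0], [0.0, 0.0, 2.0]])  # m×m, illustrative

cov_a = sigma2 * np.kron(np.eye(m), Sigma1)   # σ²(I_m ⊗ Σ1)
cov_b = sigma2 * np.kron(Sigma2, np.eye(n))   # σ²(Σ2 ⊗ I_n)
cov_c = sigma2 * np.eye(m * n)                # σ² I_mn
print(cov_a.shape, cov_b.shape, cov_c.shape)  # (6, 6) (6, 6) (6, 6)
```

Each structure yields an mn × mn symmetric covariance matrix for the vectorized error term, so the general results specialize by substituting the corresponding Kronecker form.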
We believe that these fruitful studies on the equivalences of BLUPs/BLUEs provide significant advances in the algebraic methodology of the statistical analysis of MGLMs, and we expect them to bring further methodological improvements and advances in the field of multivariate analysis. Finally, we propose a further problem on the comparison and equivalence analysis of statistical inference results under the following two competing constrained MGLMs:
$$
\mathscr{M}: Y = X\Theta + \Psi, \ \ A\Theta = B; \qquad \mathscr{N}: Y = X\Theta + Z\Gamma + \Psi, \ \ A\Theta = B,
$$
where A Θ = B is a consistent matrix equation for the unknown parameter matrix Θ .

Author Contributions

Conceptualization, B.J.; methodology, Y.T.; investigation, B.J. and Y.T.; writing: original draft, B.J.; writing: review and editing, Y.T. All authors have read and agreed to the published version of the manuscript.

Funding

This work was funded by the Shandong Provincial Natural Science Foundation (grant ZR2019MA065).

Acknowledgments

The authors are grateful to three referees for their helpful reports on an earlier version of this article.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Markiewicz, A.; Puntanen, S. All about the ⊥ with its applications in the linear statistical models. Open Math. 2015, 13, 33–50.
  2. Puntanen, S.; Styan, G.P.H.; Isotalo, J. Matrix Tricks for Linear Statistical Models: Our Personal Top Twenty; Springer: Berlin, Germany, 2011.
  3. Rao, C.R.; Mitra, S.K. Generalized Inverse of Matrices and Its Applications; Wiley: New York, NY, USA, 1971.
  4. Jiang, B.; Tian, Y. On equivalence of predictors/estimators under a multivariate general linear model with augmentation. J. Korean Stat. Soc. 2017, 46, 551–561.
  5. Nel, D.G. Tests for equality of parameter matrices in two multivariate linear models. J. Multivar. Anal. 1997, 61, 29–37.
  6. Gamage, J.; Ananda, M.M.A. An exact test for testing the equality of parameter matrices in two multivariate linear models. Linear Algebra Appl. 2006, 418, 882–885.
  7. Isotalo, J.; Puntanen, S.; Styan, G.P.H. Effect of adding regressors on the equality of the OLSE and BLUE. Int. J. Stat. Sci. 2010, 6, 193–201.
  8. Jammalamadaka, S.R.; Sengupta, D. Changes in the general linear model: A unified approach. Linear Algebra Appl. 1999, 289, 225–242.
  9. Jammalamadaka, S.R.; Sengupta, D. Inclusion and exclusion of data or parameters in the general linear model. Stat. Probab. Lett. 2007, 77, 1235–1247.
  10. Gan, S.; Sun, Y.; Tian, Y. Equivalence of predictors under real and over-parameterized linear models. Commun. Stat. Theor. Meth. 2017, 46, 5368–5383.
  11. Jun, S.J.; Pinkse, J. Adding regressors to obtain efficiency. Econom. Theory 2009, 25, 298–301.
  12. Li, W.; Tian, Y.; Yuan, R. Statistical analysis of a linear regression model with restrictions and superfluous variables. J. Ind. Manag. Optim. 2023, 19, 3107–3127.
  13. Lu, C.; Gan, S.; Tian, Y. Some remarks on general linear model with new regressors. Stat. Probab. Lett. 2015, 97, 16–24.
  14. Magnus, J.R.; Durbin, J. Estimation of regression coefficients of interest when other regression coefficients are of no interest. Econometrica 1999, 67, 639–643.
  15. Baksalary, J.K. A study of the equivalence between a Gauss–Markoff model and its augmentation by nuisance parameters. Statistics 1984, 15, 3–35.
  16. Bhimasankaram, P.; Jammalamadaka, S.R. Updates of statistics in a general linear model: A statistical interpretation and applications. Commun. Stat. Simul. Comput. 1994, 23, 789–801.
  17. Haslett, S.J.; Puntanen, S. Effect of adding regressors on the equality of the BLUEs under two linear models. J. Stat. Plann. Inference 2010, 140, 104–110.
  18. Marsaglia, G.; Styan, G.P.H. Equalities and inequalities for ranks of matrices. Linear Multilinear Algebra 1974, 2, 269–292.
  19. Tian, Y. Matrix rank and inertia formulas in the analysis of general linear models. Open Math. 2017, 15, 126–150.
  20. Penrose, R. A generalized inverse for matrices. Proc. Camb. Phil. Soc. 1955, 51, 406–413.
  21. Tian, Y. A new derivation of BLUPs under random-effects model. Metrika 2015, 78, 905–918.
  22. Tian, Y. A matrix handling of predictions under a general linear random-effects model with new observations. Electron. J. Linear Algebra 2015, 29, 30–45.