Article

Analysis of Optimal Prediction Under Stochastically Restricted Linear Model and Its Subsample Models

Department of Econometrics, Sakarya University, Sakarya 54187, Turkey
Axioms 2024, 13(12), 882; https://doi.org/10.3390/axioms13120882
Submission received: 25 November 2024 / Revised: 16 December 2024 / Accepted: 17 December 2024 / Published: 19 December 2024
(This article belongs to the Special Issue Stochastic and Statistical Analysis in Natural Sciences)

Abstract
This paper provides a study on optimal prediction problems in a linear model and its subsample models with linear stochastic restrictions, using matrix theory for precise analytical solutions. It focuses on deriving analytical expressions using block matrix inertia and rank methods to determine which of the best linear unbiased predictors (BLUPs) of a general vector of unknown parameters is superior to others under a stochastically restricted linear model and its subsample models. Additionally, this study examines the comparative results of the best linear unbiased estimators of unknown parameters. The comparisons in the study are based on the mean squared error matrix (MSEM) criterion. Finally, a numerical example is given to illustrate the theoretical results.
MSC:
62J05; 62H12; 62F30; 15A03

1. Introduction

Linear regression models are one of the cornerstones of statistical analysis, playing a crucial role in data examination and the formulation of novel statistical methodologies. These models are widely used in both theoretical and empirical studies across a wide range of scientific disciplines. In statistical research, it is common to encounter pertinent extraneous information that often serves as prior knowledge for the models, which can take the form of exact or stochastic restrictions on unknown parameters within linear regression models; see [1] for further details. When such extraneous information stems from theoretical considerations or established knowledge concerning the relationships among unknown parameters, exact restrictions emerge. On the other hand, linear stochastic restrictions emerge when prior information is gleaned from previous studies or established associations with relevant research endeavors. These restrictions are integrated into the assumptions of the models. In such instances, linear models (LMs) become restricted LMs (RLMs) through the incorporation of specific restrictions on unknown parameters into their assumptions.
The present study focuses on a stochastic RLM (SRLM) with general assumptions. By dividing the matrices and vectors into two parts in the model and also in the restriction, two subsample models of this model are obtained with stochastic restrictions. This partitioning facilitates a more detailed exploration of the relationships within the model, enabling a comprehensive understanding of its structure and properties. After converting the models to their implicitly stochastically restricted forms, various equalities and inequalities are established for comparisons between the best linear unbiased predictors (BLUPs) for a vector covering all unknown parameters in the models. The mean squared error matrix (MSEM) method is used for comparisons. This approach provides a quantitative basis for evaluating predictor performance, ensuring a rigorous assessment of their relative efficiency. Within the context of comparisons, complicated matrix operations are frequently encountered. To streamline these operations, block matrix inertia and rank methodology are used, which serve to simplify these complicated expressions. This methodology has proven instrumental in characterizing the covariance matrices for predictors or estimators under LMs, offering a systematic approach for evaluating and comparing statistical properties across different model structures. Moreover, the comparison results obtained for the models can be reduced to results corresponding to special cases of the vector encompassing all unknown vectors in the model. This reduction enables a broader application of the findings. Thus, the results for the best linear unbiased estimator (BLUE) of the unknown parameters in the model are also obtained. In the statistical literature, LMs with restrictions on unknown parameters have been extensively studied from a variety of approaches. 
One may refer to the works [1,2,3,4,5,6,7,8,9], among others, where an LM with a linear stochastic restriction on the unknown parameters is used. Refs. [10,11,12,13,14], along with the references therein, are recent studies on LMs with an exact restriction on unknown parameters. A series of recent studies [15,16,17,18], along with additional works [19,20,21], have extensively utilized the inertia and rank methodology of block matrices. This methodology represents a significant contribution to both mathematics and statistics, now made more accessible through various rank and inertia formulas. These advancements have led to hundreds of results, greatly deepening our theoretical understanding of LMs.
Some pieces of notation will now be introduced to clarify the expressions used in this article. To begin with, it should be noted that this article is exclusively focused on real matrices. The analysis, methods, and results presented are all constrained to real-valued matrices, excluding any involvement of complex or other types of matrices. The column space, the rank, the transpose, and the Moore–Penrose generalized inverse of a real matrix $X$ are denoted by $\mathcal{C}(X)$, $r(X)$, $X'$, and $X^+$, respectively. A real $n \times n$ matrix $X$ is a positive semi-definite (psd) matrix if $a'Xa \ge 0$ for every real $n$-dimensional vector $a$. $X \succeq \Upsilon$ means that the difference $X - \Upsilon$ is psd in the Löwner partial ordering for two symmetric real matrices $X$ and $\Upsilon$ of the same size. A well-established property, the Löwner partial ordering offers significant strength and utility when comparing complex Hermitian (or real symmetric) matrices. For more on its applications in statistical analysis, see, e.g., [22]. The inertia of a symmetric real matrix $X$ is the triple $In(X) = \{i_+(X), i_0(X), i_-(X)\}$, in which $i_+(X)$, $i_0(X)$, and $i_-(X)$ stand for the numbers of positive, zero, and negative eigenvalues of $X$, counted with multiplicities, respectively. Additionally, the symbols $i_\pm(X)$ and $i_\mp(X)$ are used to collectively denote the positive and negative inertias of $X$. The symbol $X^\perp$ is used for the orthogonal projector matrix $I_m - XX^+$ associated with a real $m \times n$ matrix $X$, where $I_m$ is the identity matrix of order $m$. $E(a)$ denotes the expectation vector of a random vector $a$, while $\operatorname{cov}(a, b)$ represents the covariance matrix between two random vectors $a$ and $b$. When $a = b$, $D(a)$ is used to denote the covariance matrix of $a$ with itself, i.e., $D(a) = \operatorname{cov}(a, a)$, representing the dispersion matrix of $a$.
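Since inertia triples are computed throughout the paper, a minimal numerical sketch may be helpful. The following Python function is an illustration, not part of the paper; the numerical tolerance `tol` is our choice. It counts the positive, zero, and negative eigenvalues of a symmetric matrix:

```python
import numpy as np

def inertia(X, tol=1e-10):
    """Return the inertia triple (i_plus, i_zero, i_minus) of a symmetric
    matrix X: the numbers of positive, zero, and negative eigenvalues,
    counted with multiplicities."""
    w = np.linalg.eigvalsh(X)  # real eigenvalues of a symmetric matrix
    return (int(np.sum(w > tol)),
            int(np.sum(np.abs(w) <= tol)),
            int(np.sum(w < -tol)))

X = np.diag([3.0, 0.0, -2.0, -1.0])
print(inertia(X))  # (1, 1, 2)
# r(X) = i_+(X) + i_-(X), the identity used repeatedly in the rank equalities.
assert inertia(X)[0] + inertia(X)[2] == np.linalg.matrix_rank(X)
```

Note that reducing Löwner comparisons and rank equalities to such eigenvalue counts is exactly what makes the inertia and rank methodology computational.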
This paper is structured as follows. First, an SRLM is introduced with its two subsample models and their assumptions. This introduction is followed by a discussion of their implicitly stochastically restricted forms as well as a discussion of parameter prediction and estimation, and some fundamental results on BLUP are given. Then, several comparison results are established for the MSEM of any predictor and the BLUP of the unified vector of all unknown parameters in the SRLM. The theoretical results are followed by an illustration using a real data set. In Appendix A, first some fundamental results on matrix inertia and rank methodology are introduced, and then the proof of the main result is presented.

2. Some Preliminary Results

2.1. Linear Models with Stochastic Restriction

An SRLM can be defined by
$$\mathcal{M}: y = X\alpha + \varepsilon \quad \text{with stochastic restriction} \quad R\alpha + e = r,$$
$$E(\varepsilon) = 0, \quad D(\varepsilon) = \sigma^2\Gamma, \quad E(e) = 0, \quad D(e) = \sigma^2\Lambda, \quad \operatorname{cov}(\varepsilon, e) = \sigma^2\Phi,$$
where $y$ is an $n \times 1$ vector of responses; $X$ and $R$ are $n \times k$ and $m \times k$ known matrices of arbitrary ranks, respectively; $\alpha$ is a $k \times 1$ unknown parameter vector; $\varepsilon$ and $e$ are $n \times 1$ and $m \times 1$ vectors of random errors, respectively; $r$ is an $m \times 1$ known vector; $\Gamma$, $\Lambda$, and $\Phi$ are known, possibly singular, matrices; and $\sigma^2$ is a positive unknown scalar. We can partition the matrices and vectors in $\mathcal{M}$ as
$$\mathcal{M}: \begin{bmatrix} y_1 \\ y_2 \end{bmatrix} = \begin{bmatrix} X_1 \\ X_2 \end{bmatrix}\alpha + \begin{bmatrix} \varepsilon_1 \\ \varepsilon_2 \end{bmatrix}, \quad \begin{bmatrix} R_1 \\ R_2 \end{bmatrix}\alpha + \begin{bmatrix} e_1 \\ e_2 \end{bmatrix} = \begin{bmatrix} r_1 \\ r_2 \end{bmatrix},$$
and accordingly, $\Gamma$, $\Lambda$, and $\Phi$ can be written in block partitioned form: $\Gamma = (\Gamma_{ij})$, $\Lambda = (\Lambda_{ij})$, and $\Phi = (\Phi_{ij})$, $i, j = 1, 2$. Then, two subsample models of $\mathcal{M}$ can be written as
$$\mathcal{M}_i: y_i = X_i\alpha + \varepsilon_i \quad \text{with stochastic restriction} \quad R_i\alpha + e_i = r_i,$$
where $y_i$ and $r_i$ are $n_i \times 1$ and $m_i \times 1$ vectors, respectively, with $n_1 + n_2 = n$ and $m_1 + m_2 = m$, $i = 1, 2$. The models in (1) and (3) can be merged with their stochastic restrictions into the following combined forms:
$$\mathcal{Z}: z = Z\alpha + \tau, \quad z = \begin{bmatrix} y \\ r \end{bmatrix}, \quad Z = \begin{bmatrix} X \\ R \end{bmatrix}, \quad \tau = \begin{bmatrix} \varepsilon \\ e \end{bmatrix},$$
and
$$\mathcal{Z}_i: z_i = Z_i\alpha + \tau_i, \quad z_i = \begin{bmatrix} y_i \\ r_i \end{bmatrix}, \quad Z_i = \begin{bmatrix} X_i \\ R_i \end{bmatrix}, \quad \tau_i = \begin{bmatrix} \varepsilon_i \\ e_i \end{bmatrix},$$
where D ( τ ) = σ 2 Υ and D ( τ i ) = σ 2 Υ i . The models Z and Z i in (4) and (5) are implicitly stochastically restricted forms of the models M and M i in (1) and (3), respectively.
To produce conclusions on predictors and estimators of all unknown vectors under the models Z and Z i , the following general vector
$$\upsilon_i = K\alpha + L_i\tau_i, \quad \text{or equivalently,} \quad \upsilon_i = K\alpha + L_iT_i\tau,$$
with $D(\upsilon_i) = \sigma^2 L_i\Upsilon_iL_i' = \sigma^2 L_iT_i\Upsilon T_i'L_i'$, $\operatorname{cov}(\upsilon_i, z_i) = \sigma^2 L_i\Upsilon_i$, and $\operatorname{cov}(\upsilon_i, z) = \sigma^2 L_iT_i\Upsilon$,
can be considered, where K and L i are t × k and t × ( n i + m i ) given matrices of arbitrary ranks, respectively, and
$$T_i = \begin{bmatrix} N_i & 0 \\ 0 & M_i \end{bmatrix} \quad \text{with} \quad N_1 = [I_{n_1}, 0], \quad N_2 = [0, I_{n_2}], \quad M_1 = [I_{m_1}, 0], \quad \text{and} \quad M_2 = [0, I_{m_2}].$$
We note that the model $\mathcal{M}_i$ is the transformed version of $\mathcal{M}$, obtained by pre-multiplying the model $\mathcal{M}$ by the matrix $N_i$. Moreover, the model $\mathcal{Z}_i$ is the transformed version of $\mathcal{Z}$ under the transformation matrix $T_i$ [16,17,23]. Additionally, in this study, the model $\mathcal{Z}$ is assumed to be consistent, i.e., $z \in \mathcal{C}[Z, \Upsilon]$ holds with probability 1; see, e.g., [24]. In this case, its transformed model $\mathcal{Z}_i$ is also consistent.
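The transformation matrices can be illustrated numerically. The sketch below uses illustrative sizes (an assumption for this sketch, not the paper's data set) to build $N_1$, $M_1$, and $T_1$ with NumPy and to check that $T_1$ extracts the first subsample from the stacked vector $z = (y', r')'$:

```python
import numpy as np

# Illustrative sizes (assumptions, not from the paper's data set).
n1, n2, m1, m2 = 3, 2, 1, 1
n, m = n1 + n2, m1 + m2

N1 = np.hstack([np.eye(n1), np.zeros((n1, n2))])  # N_1 = [I_{n1}, 0]
M1 = np.hstack([np.eye(m1), np.zeros((m1, m2))])  # M_1 = [I_{m1}, 0]

# T_1 = block-diag(N_1, M_1) extracts subsample 1 from z = (y', r')'.
T1 = np.block([[N1, np.zeros((n1, m))],
               [np.zeros((m1, n)), M1]])

z = np.arange(1.0, n + m + 1)  # stacked vector z with entries 1, ..., n + m
z1 = T1 @ z                    # first n1 entries of y and first m1 entries of r
print(z1)                      # [1. 2. 3. 6.]
```

The same construction with $N_2$ and $M_2$ would pick out the second subsample, so that $z_i = T_i z$, $Z_i = T_i Z$, and $\Upsilon_i = T_i \Upsilon T_i'$.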

2.2. Predictions Under SRLM

The concepts related to the predictability of any vector are discussed in this section. If a vector is predictable within an LM, it can be predicted using the available data. The predictability condition of $\upsilon_i$ in $\mathcal{Z}_i$ is given as follows:
$$\upsilon_i \ \text{is predictable under} \ \mathcal{Z}_i \Leftrightarrow \mathcal{C}(K') \subseteq \mathcal{C}(Z_i').$$
This requirement also corresponds to the estimability condition of the vector $K\alpha$, i.e.,
$$K\alpha \ \text{is estimable under} \ \mathcal{Z}_i \Leftrightarrow \mathcal{C}(K') \subseteq \mathcal{C}(Z_i').$$
This equivalence indicates that if the vector K α is estimable under Z i , it can be estimated based on the available data. The classical concepts and definitions of predictability were introduced by [25], while further details regarding the estimability of any vector can be found in the work of [26]. Note that if υ i is predictable under Z i , then it is predictable under Z .
Suppose that $\upsilon_i$ is predictable under the considered models. $F_iz$ is said to be the BLUP of $\upsilon_i$ under $\mathcal{Z}$, represented by $F_iz = \tilde\upsilon_i(\mathcal{Z})$, if $D(F_iz - \upsilon_i) = \min$ holds in the Löwner sense subject to $E(F_iz - \upsilon_i) = 0$; see, e.g., [22]. The well-known fundamental expressions for the BLUP under $\mathcal{Z}$ are given below:
$$F_iz = \tilde\upsilon_i(\mathcal{Z}) \Leftrightarrow F_i[Z, \Upsilon Z^\perp] = [K, L_iT_i\Upsilon Z^\perp],$$
$$\tilde\upsilon_i(\mathcal{Z}) = F_iz = \big([K, L_iT_i\Upsilon Z^\perp][Z, \Upsilon Z^\perp]^+ + U_i[Z, \Upsilon Z^\perp]^\perp\big)z,$$
and
$$D(\upsilon_i - \tilde\upsilon_i(\mathcal{Z})) = \sigma^2\big([K, L_iT_i\Upsilon Z^\perp]W^+ - L_iT_i\big)\Upsilon\big([K, L_iT_i\Upsilon Z^\perp]W^+ - L_iT_i\big)',$$
where $U_i$ is an arbitrary $t \times (n + m)$ real matrix and $W = [Z, \Upsilon Z^\perp]$; for details, see, e.g., [22,27,28]. Further, the well-known results on the BLUE of $K\alpha$ under $\mathcal{Z}$, represented by $K\hat\alpha(\mathcal{Z})$, are given as
$$K\hat\alpha(\mathcal{Z}) = \big([K, 0][Z, \Upsilon Z^\perp]^+ + U[Z, \Upsilon Z^\perp]^\perp\big)z$$
and
$$D(K\hat\alpha(\mathcal{Z})) = \sigma^2[K, 0][Z, \Upsilon Z^\perp]^+\Upsilon\big([K, 0][Z, \Upsilon Z^\perp]^+\big)',$$
where $U$ is an arbitrary $t \times (n + m)$ real matrix; for details, see, e.g., [22,24]. Using a similar method to the one above, the following equalities are written for the BLUP and the BLUE under $\mathcal{Z}_i$:
$$\tilde\upsilon_i(\mathcal{Z}_i) = \big([K, L_i\Upsilon_iZ_i^\perp][Z_i, \Upsilon_iZ_i^\perp]^+ + U_i[Z_i, \Upsilon_iZ_i^\perp]^\perp\big)z_i,$$
$$D(\upsilon_i - \tilde\upsilon_i(\mathcal{Z}_i)) = \sigma^2\big([K, L_i\Upsilon_iZ_i^\perp]W_i^+ - L_i\big)\Upsilon_i\big([K, L_i\Upsilon_iZ_i^\perp]W_i^+ - L_i\big)',$$
and
$$K\hat\alpha(\mathcal{Z}_i) = \big([K, 0][Z_i, \Upsilon_iZ_i^\perp]^+ + U_i[Z_i, \Upsilon_iZ_i^\perp]^\perp\big)z_i,$$
$$D(K\hat\alpha(\mathcal{Z}_i)) = \sigma^2[K, 0]W_i^+\Upsilon_i\big([K, 0]W_i^+\big)',$$
where $U_i$ is an arbitrary $t \times (n_i + m_i)$ real matrix and $W_i = [Z_i, \Upsilon_iZ_i^\perp]$.
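To make the estimator formulas concrete, the following sketch computes the BLUE as $K\hat\alpha(\mathcal{Z}) = [K, 0][Z, \Upsilon Z^\perp]^+z$, taking the arbitrary matrix $U = 0$, on a small simulated model; all sizes and matrices are illustrative assumptions, not the paper's data. On a noise-free realization $z = Z\alpha$, unbiasedness implies the estimator returns $\alpha$ exactly:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy implicitly restricted model z = Z alpha + tau with D(tau) = sigma^2 Upsilon.
# All sizes and matrices here are illustrative assumptions.
p, k = 8, 3                         # p = n + m rows in the stacked model
Z = rng.standard_normal((p, k))
A = rng.standard_normal((p, p))
Ups = A @ A.T                       # a psd dispersion matrix Upsilon
alpha = np.array([1.0, -2.0, 0.5])
z = Z @ alpha                       # noise-free realization for the sanity check

Zperp = np.eye(p) - Z @ np.linalg.pinv(Z)  # orthogonal projector Z^perp
W = np.hstack([Z, Ups @ Zperp])            # W = [Z, Upsilon Z^perp]

K = np.eye(k)                              # K = I_k: estimate alpha itself
F = np.hstack([K, np.zeros((k, p))]) @ np.linalg.pinv(W)  # [K, 0] W^+, with U = 0
alpha_hat = F @ z
print(np.round(alpha_hat, 6))              # recovers alpha on noise-free data
```

With noisy data the same matrix $F$ yields the BLUP/BLUE realizations, and the dispersion formulas above give their exact covariance matrices.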

3. Comparisons Under SRLM

Before giving the comparison results and related conclusions between the BLUPs and the BLUEs under the models, we recall the concepts of the MSEM of any predictor or estimator.
Let $\tilde a$ be any predictor or estimator of a vector $a$ in an LM. The MSEM of $\tilde a$ is defined as
$$\mathrm{MSEM}(\tilde a) = E\big[(\tilde a - a)(\tilde a - a)'\big].$$
Let $\tilde a_1$ and $\tilde a_2$ be two given predictors or estimators of $a$ in an LM. The MSEM criterion for $\tilde a_1$ and $\tilde a_2$ is expressed as follows; see, e.g., [29]:
$$\mathrm{MSEM}(\tilde a_1) \succeq \mathrm{MSEM}(\tilde a_2) \Leftrightarrow \tilde a_2 \ \text{is superior to} \ \tilde a_1.$$
We now present the main result concerning the comparison of MSEMs for the BLUPs of unknown vectors under considered models. Following this, we derive several consequences that correspond to specific special cases.
Theorem 1. 
Let $\tilde\upsilon_i(\mathcal{Z})$ and $\tilde\upsilon_i(\mathcal{Z}_i)$ be the BLUPs of the predictable vector $\upsilon_i$ in (6) under $\mathcal{Z}$ and $\mathcal{Z}_i$, respectively. We define the following:
$$G = \begin{bmatrix} \Upsilon & 0 & Z & 0 & \Upsilon T_i'L_i' \\ 0 & \Upsilon_i & 0 & Z_i & \Upsilon_iL_i' \\ Z' & 0 & 0 & 0 & K' \\ 0 & Z_i' & 0 & 0 & K' \\ L_iT_i\Upsilon & L_i\Upsilon_i & K & K & 0 \end{bmatrix}.$$
Then,
(a) 
υ ˜ i ( Z i ) is superior to υ ˜ i ( Z ) according to the MSEM criterion, i.e.,
$$\mathrm{MSEM}(\tilde\upsilon_i(\mathcal{Z})) \succeq \mathrm{MSEM}(\tilde\upsilon_i(\mathcal{Z}_i)) \Leftrightarrow i_-(G) = r(Z) + r[Z_i, \Upsilon_i].$$
(b) 
υ ˜ i ( Z ) is superior to υ ˜ i ( Z i ) according to the MSEM criterion, i.e.,
$$\mathrm{MSEM}(\tilde\upsilon_i(\mathcal{Z}_i)) \succeq \mathrm{MSEM}(\tilde\upsilon_i(\mathcal{Z})) \Leftrightarrow i_+(G) = r(Z_i) + r[Z, \Upsilon].$$
(c) 
$$\mathrm{MSEM}(\tilde\upsilon_i(\mathcal{Z}_i)) = \mathrm{MSEM}(\tilde\upsilon_i(\mathcal{Z})) \Leftrightarrow r(G) = r[Z, \Upsilon] + r(Z_i) + r[Z_i, \Upsilon_i] + r(Z).$$
As is well known, the MSEM is a fundamental tool widely used for assessing and comparing the superiority of two predictors. The dispersion matrix of the BLUP is equivalent to its MSEM; see [3]. Within the framework of the SRLM, comparing the BLUPs of a unified form of all unknown parameters using the MSEM criterion allows the determination of the optimal BLUP among those being compared. The results obtained in Theorem 1 include cases demonstrating the superiority of the BLUPs of the unified form of all common unknown parameters under the SRLM and its subsample model. These comparisons are equivalently expressed through inertias and ranks, reducing the process to merely comparing numerical values.
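As a numerical sanity check on this bookkeeping, the block matrix $G$ of Theorem 1 can be assembled and its inertia and rank computed directly; the sizes and random matrices below are illustrative assumptions, so which of conditions (a)–(c) holds depends on the simulated data, and only the structural identities are asserted:

```python
import numpy as np

rng = np.random.default_rng(1)

def inertia(X, tol=1e-8):
    w = np.linalg.eigvalsh(X)
    return int(np.sum(w > tol)), int(np.sum(w < -tol))

# Illustrative sizes (assumptions for this sketch), with i = 1.
n1, n2, m1, m2, k, t = 4, 3, 1, 1, 2, 2
n, m = n1 + n2, m1 + m2

T1 = np.zeros((n1 + m1, n + m))  # T_1 = block-diag(N_1, M_1)
T1[:n1, :n1] = np.eye(n1)        # N_1 = [I_{n1}, 0]
T1[n1:, n:n + m1] = np.eye(m1)   # M_1 = [I_{m1}, 0]

Z = rng.standard_normal((n + m, k))
A = rng.standard_normal((n + m, n + m))
Ups = A @ A.T                       # dispersion matrix Upsilon (psd)
Z1, Ups1 = T1 @ Z, T1 @ Ups @ T1.T  # transformed model quantities
K = rng.standard_normal((t, k))
L1 = rng.standard_normal((t, n1 + m1))

# Assemble the symmetric block matrix G of Theorem 1 (for i = 1).
O = np.zeros
G = np.block([
    [Ups,             O((n+m, n1+m1)), Z,             O((n+m, k)), Ups @ T1.T @ L1.T],
    [O((n1+m1, n+m)), Ups1,            O((n1+m1, k)), Z1,          Ups1 @ L1.T],
    [Z.T,             O((k, n1+m1)),   O((k, k)),     O((k, k)),   K.T],
    [O((k, n+m)),     Z1.T,            O((k, k)),     O((k, k)),   K.T],
    [L1 @ T1 @ Ups,   L1 @ Ups1,       K,             K,           O((t, t))],
])

ip, im = inertia(G)
print(ip, im, np.linalg.matrix_rank(G))  # i_+(G), i_-(G), and r(G) = i_+ + i_-
```

Comparing `ip` and `im` against the rank sums in Theorem 1 then decides, by pure counting, which BLUP is superior for the simulated data.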
In the vector υ i given in (6), when L i = 0 , this vector is expressed as K α . In the following corollary, the matrix corresponding to this case is presented, which corresponds with the matrix described in Theorem 1. Thus, the results expressed through the inertia and rank equalities in Theorem 1 will be reduced to the results demonstrating the superiority of the BLUEs of K α under the SRLM and its subsample model, using the matrix provided in the following result.
Corollary 1. 
Let $\hat\varphi(\mathcal{Z})$ and $\hat\varphi(\mathcal{Z}_i)$ be the BLUEs of the estimable vector $K\alpha$ under $\mathcal{Z}$ and $\mathcal{Z}_i$, respectively. The matrix $G$ in Theorem 1 is written as
$$\begin{bmatrix} \Upsilon & 0 & Z & 0 & 0 \\ 0 & \Upsilon_i & 0 & Z_i & 0 \\ Z' & 0 & 0 & 0 & K' \\ 0 & Z_i' & 0 & 0 & K' \\ 0 & 0 & K & K & 0 \end{bmatrix}.$$
Then, items (a)–(c) in Theorem 1 are similarly expressed for the BLUEs $\hat\varphi(\mathcal{Z})$ and $\hat\varphi(\mathcal{Z}_i)$.
Following this, we now consider an additional case of the BLUEs under the models. In the vector υ i given in (6), when L i = 0 and K = I k , this vector is expressed as α . In the following result, the matrix and corresponding results for this case are presented, which align with the matrix and results described in Theorem 1.
Corollary 2. 
Let $\hat\alpha(\mathcal{Z})$ and $\hat\alpha(\mathcal{Z}_i)$ be the BLUEs of the estimable vector $\alpha$ under $\mathcal{Z}$ and $\mathcal{Z}_i$, respectively. The matrix $G$ in Theorem 1 is reduced to the matrix
$$\begin{bmatrix} \Upsilon & 0 & Z \\ 0 & \Upsilon_i & Z_i \\ Z' & Z_i' & 0 \end{bmatrix}.$$
Then, by using the estimability condition of $\alpha$, expressed as $r(Z) = r(Z_i) = k$ in the models, items (a)–(c) in Theorem 1 are expressed as follows.
(a) 
α ^ ( Z i ) is superior to α ^ ( Z ) according to the MSEM criterion, i.e.,
$$\mathrm{MSEM}(\hat\alpha(\mathcal{Z})) \succeq \mathrm{MSEM}(\hat\alpha(\mathcal{Z}_i)) \Leftrightarrow i_-(G) = r[Z_i, \Upsilon_i].$$
(b) 
α ^ ( Z ) is superior to α ^ ( Z i ) according to the MSEM criterion, i.e.,
$$\mathrm{MSEM}(\hat\alpha(\mathcal{Z}_i)) \succeq \mathrm{MSEM}(\hat\alpha(\mathcal{Z})) \Leftrightarrow i_+(G) = r[Z, \Upsilon].$$
(c) 
$$\mathrm{MSEM}(\hat\alpha(\mathcal{Z}_i)) = \mathrm{MSEM}(\hat\alpha(\mathcal{Z})) \Leftrightarrow r(G) = r[Z, \Upsilon] + r[Z_i, \Upsilon_i].$$
The results obtained in this study can be reduced to the corresponding results for the LM with exact restrictions and the LM without restrictions, i.e., the unconstrained model. These are expressed in the following remarks.
Remark 1. 
Consider the LM in (1) with a linear exact restriction, i.e.,
$$y = X\alpha + \varepsilon \quad \text{with} \quad R\alpha = r \quad \text{and} \quad D(\varepsilon) = \sigma^2\Gamma,$$
and similar divisions as in (2). Then, implicitly restricted forms of the models are expressed as
$$z = Z\alpha + \tau, \quad z = \begin{bmatrix} y \\ r \end{bmatrix}, \quad Z = \begin{bmatrix} X \\ R \end{bmatrix}, \quad \tau = \begin{bmatrix} \varepsilon \\ 0 \end{bmatrix},$$
and
$$z_i = Z_i\alpha + \tau_i, \quad z_i = \begin{bmatrix} y_i \\ r_i \end{bmatrix}, \quad Z_i = \begin{bmatrix} X_i \\ R_i \end{bmatrix}, \quad \tau_i = \begin{bmatrix} \varepsilon_i \\ 0 \end{bmatrix},$$
where $D(\tau) = \sigma^2\begin{bmatrix} \Gamma & 0 \\ 0 & 0 \end{bmatrix} := \sigma^2\tilde\Upsilon$ and $D(\tau_i) = \sigma^2\begin{bmatrix} \Gamma_{ii} & 0 \\ 0 & 0 \end{bmatrix} := \sigma^2\tilde\Upsilon_i$; the corresponding results follow by setting $\tilde\Upsilon$ and $\tilde\Upsilon_i$ in place of $\Upsilon$ and $\Upsilon_i$ in our results, respectively.
Remark 2. 
Consider the LM in (1) without any restriction, i.e.,
y = X α + ε with D ( ε ) = σ 2 Γ ,
and similar divisions as in (2), by setting $\begin{bmatrix} \Gamma & 0 \\ 0 & 0 \end{bmatrix}$ instead of $\Upsilon$, $\begin{bmatrix} \Gamma_{ii} & 0 \\ 0 & 0 \end{bmatrix}$ instead of $\Upsilon_i$, $\begin{bmatrix} X \\ 0 \end{bmatrix}$ instead of $Z$, $\begin{bmatrix} X_i \\ 0 \end{bmatrix}$ instead of $Z_i$, $\begin{bmatrix} y \\ 0 \end{bmatrix}$ instead of $z$, and $\begin{bmatrix} y_i \\ 0 \end{bmatrix}$ instead of $z_i$ in the results.
In summary, my contribution to the literature is a theoretically comprehensive study of the comparison results between BLUPs/BLUEs under an SRLM and its stochastically restricted submodels. In other words, as stated in Remarks 1 and 2, the results in this study expand some of the results in the literature given for both an LM with an exact linear restriction and an unconstrained LM within the relevant framework.

4. Numerical Examples

The data set taken from [30] is now used to illustrate the theoretical findings. This data set contains observations from 21 days of operation of a plant for the oxidation of ammonia, which is used for producing nitric acid. The response variable is the stack loss defined as the percentage of the ingoing ammonia that escapes unabsorbed. The explanatory variables are air flow, cooling water inlet temperature in °C, and acid concentration in percentage.
First of all, the numerical example above is used by considering
$$\Gamma = I_{21}, \quad \Lambda = I_2, \quad \text{and} \quad \Phi = 0 \quad \text{with} \quad \sigma^2 = 0.12$$
to illustrate the comparison results obtained in Section 3. In this case, the comparisons of the BLUEs of the vector $Z_1\alpha$ under $\mathcal{Z}$ and $\mathcal{Z}_1$ are presented as follows. Using the NumPy library in Python with the above settings substituted into the matrix $G$, the values and multiplicities of the eigenvalues of $G$ are presented in Table 1, where the stochastic restriction $R\alpha + e = r$ is arbitrarily chosen with
$$R = \begin{bmatrix} 1 & 0 & 1 \\ 1 & 0 & 1 \end{bmatrix} = \begin{bmatrix} R_1 \\ R_2 \end{bmatrix}$$
and added to the model.
Thus, from Table 1, it is seen that $i_+(G) = 26$ and $i_-(G) = 19$, and thereby, $r(G) = 45$. It is also obtained that $r[Z, \Upsilon] = 23$ and $r(Z_1) = 3$. Therefore,
$$i_+(G) = r[Z, \Upsilon] + r(Z_1) = 26 \Leftrightarrow \mathrm{MSEM}(Z_1\hat\alpha(\mathcal{Z})) \preceq \mathrm{MSEM}(Z_1\hat\alpha(\mathcal{Z}_1))$$
holds. Since r ( Z ) = 3 and r Z 1 , Υ 1 = 13 , i.e.,
$$i_-(G) \ne r[Z_1, \Upsilon_1] + r(Z),$$
the relation $\mathrm{MSEM}(Z_1\hat\alpha(\mathcal{Z})) \succeq \mathrm{MSEM}(Z_1\hat\alpha(\mathcal{Z}_1))$ does not hold.
Also, direct calculations show that
$$\mathrm{MSEM}(Z_1\hat\alpha(\mathcal{Z})) = \begin{bmatrix}
0.034 & 0.035 & 0.026 & 0.010 & 0.008 & 0.009 & 0.009 & 0.009 & 0.005 & 0.001 & -0.001 & -0.002 & 0.003 \\
0.035 & 0.035 & 0.026 & 0.010 & 0.008 & 0.009 & 0.010 & 0.010 & 0.005 & 0.001 & -0.001 & -0.002 & 0.003 \\
0.026 & 0.026 & 0.021 & 0.007 & 0.007 & 0.007 & 0.007 & 0.007 & 0.003 & 0.004 & 0.003 & 0.003 & 0.002 \\
0.010 & 0.010 & 0.007 & 0.015 & 0.008 & 0.012 & 0.015 & 0.015 & 0.015 & -0.004 & -0.004 & -0.008 & -0.001 \\
0.008 & 0.008 & 0.007 & 0.008 & 0.006 & 0.007 & 0.008 & 0.008 & 0.007 & 0.004 & 0.004 & 0.003 & 0 \\
0.009 & 0.009 & 0.007 & 0.012 & 0.007 & 0.009 & 0.011 & 0.011 & 0.011 & 0 & 0 & -0.002 & 0 \\
0.009 & 0.010 & 0.007 & 0.015 & 0.008 & 0.011 & 0.015 & 0.015 & 0.015 & -0.003 & -0.004 & -0.007 & -0.001 \\
0.009 & 0.010 & 0.007 & 0.015 & 0.008 & 0.011 & 0.015 & 0.015 & 0.015 & -0.003 & -0.004 & -0.007 & -0.001 \\
0.005 & 0.005 & 0.003 & 0.015 & 0.007 & 0.011 & 0.015 & 0.015 & 0.016 & -0.003 & -0.003 & -0.007 & -0.001 \\
0.001 & 0.001 & 0.004 & -0.004 & 0.004 & 0 & -0.003 & -0.003 & -0.003 & 0.015 & 0.015 & 0.019 & 0 \\
-0.001 & -0.001 & 0.003 & -0.004 & 0.004 & 0 & -0.004 & -0.004 & -0.003 & 0.015 & 0.016 & 0.019 & 0 \\
-0.002 & -0.002 & 0.003 & -0.008 & 0.003 & -0.002 & -0.007 & -0.007 & -0.007 & 0.019 & 0.019 & 0.025 & 0 \\
0.003 & 0.003 & 0.002 & -0.001 & 0 & 0 & -0.001 & -0.001 & -0.001 & 0 & 0 & 0 & 0
\end{bmatrix}$$
and
$$\mathrm{MSEM}(Z_1\hat\alpha(\mathcal{Z}_1)) = \begin{bmatrix}
0.046 & 0.046 & 0.034 & 0.002 & 0.004 & 0.003 & 0.001 & 0.001 & -0.007 & -0.002 & -0.004 & -0.003 & 0.005 \\
0.046 & 0.046 & 0.034 & 0.002 & 0.004 & 0.003 & 0.001 & 0.001 & -0.007 & -0.002 & -0.004 & -0.003 & 0.005 \\
0.034 & 0.034 & 0.027 & 0.002 & 0.006 & 0.004 & 0.001 & 0.001 & -0.005 & 0.006 & 0.005 & 0.007 & 0.004 \\
0.002 & 0.002 & 0.002 & 0.023 & 0.012 & 0.017 & 0.023 & 0.023 & 0.025 & -0.002 & -0.001 & -0.007 & -0.002 \\
0.004 & 0.004 & 0.006 & 0.012 & 0.011 & 0.012 & 0.012 & 0.012 & 0.014 & 0.011 & 0.012 & 0.011 & -0.001 \\
0.003 & 0.003 & 0.004 & 0.017 & 0.012 & 0.014 & 0.018 & 0.018 & 0.020 & 0.005 & 0.005 & 0.002 & -0.002 \\
0.001 & 0.001 & 0.001 & 0.023 & 0.012 & 0.018 & 0.023 & 0.023 & 0.026 & -0.001 & 0 & -0.006 & -0.003 \\
0.001 & 0.001 & 0.001 & 0.023 & 0.012 & 0.018 & 0.023 & 0.023 & 0.026 & -0.001 & 0 & -0.006 & -0.003 \\
-0.007 & -0.007 & -0.005 & 0.025 & 0.014 & 0.020 & 0.026 & 0.026 & 0.031 & 0.001 & 0.001 & -0.005 & -0.004 \\
-0.002 & -0.002 & 0.006 & -0.002 & 0.011 & 0.005 & -0.001 & -0.001 & 0.001 & 0.031 & 0.033 & 0.039 & 0 \\
-0.004 & -0.004 & 0.005 & -0.001 & 0.012 & 0.005 & 0 & 0 & 0.001 & 0.033 & 0.034 & 0.041 & -0.001 \\
-0.003 & -0.003 & 0.007 & -0.007 & 0.011 & 0.002 & -0.006 & -0.006 & -0.005 & 0.039 & 0.041 & 0.049 & 0 \\
0.005 & 0.005 & 0.004 & -0.002 & -0.001 & -0.002 & -0.003 & -0.003 & -0.004 & 0 & -0.001 & 0 & 0.001
\end{bmatrix}.$$
It is also noted that the difference $\mathrm{MSEM}(Z_1\hat\alpha(\mathcal{Z}_1)) - \mathrm{MSEM}(Z_1\hat\alpha(\mathcal{Z}))$ is psd because its distinct eigenvalues are 0, 0.003, 0.054, and 0.081. Then, we observe that the inequality $\mathrm{MSEM}(Z_1\hat\alpha(\mathcal{Z})) \preceq \mathrm{MSEM}(Z_1\hat\alpha(\mathcal{Z}_1))$ holds.
The numerical example above is now discussed for a more general case. We add further calculations to illustrate the comparison results by assuming that the dispersion matrices are singular. For this, the elements corresponding to the 9th and 20th rows of the dispersion matrices considered above are set to 0. In this case, the comparisons of the BLUEs of the vector $Z_1\alpha$ under $\mathcal{Z}$ and $\mathcal{Z}_1$ are presented as follows. Using the NumPy library in Python with these settings substituted into the matrix $G$, the values and multiplicities of the eigenvalues of $G$ are presented in Table 2.
Thus, from Table 2, it is seen that $i_+(G) = 26$ and $i_-(G) = 18$, and thereby, $r(G) = 44$. It is also obtained that $r[Z, \Upsilon] = 23$ and $r(Z_1) = 3$. Therefore,
$$i_+(G) = r[Z, \Upsilon] + r(Z_1) = 26 \Leftrightarrow \mathrm{MSEM}(Z_1\hat\alpha(\mathcal{Z})) \preceq \mathrm{MSEM}(Z_1\hat\alpha(\mathcal{Z}_1))$$
holds. Since r ( Z ) = 3 and r Z 1 , Υ 1 = 13 , i.e.,
$$i_-(G) \ne r[Z_1, \Upsilon_1] + r(Z),$$
the relation $\mathrm{MSEM}(Z_1\hat\alpha(\mathcal{Z})) \succeq \mathrm{MSEM}(Z_1\hat\alpha(\mathcal{Z}_1))$ does not hold.
Also, direct calculations show that
$$\mathrm{MSEM}(Z_1\hat\alpha(\mathcal{Z})) = \begin{bmatrix}
0.032 & 0.032 & 0.026 & 0.006 & 0.007 & 0.007 & 0.005 & 0.005 & 0 & 0.005 & 0.004 & 0.005 & 0.003 \\
0.032 & 0.032 & 0.026 & 0.006 & 0.007 & 0.007 & 0.005 & 0.005 & 0 & 0.005 & 0.004 & 0.005 & 0.003 \\
0.026 & 0.026 & 0.020 & 0.004 & 0.006 & 0.005 & 0.004 & 0.004 & 0 & 0.004 & 0.003 & 0.004 & 0.002 \\
0.006 & 0.006 & 0.004 & 0.001 & 0.001 & 0.001 & 0.001 & 0.001 & 0 & 0.001 & 0.001 & 0.001 & 0.001 \\
0.007 & 0.007 & 0.006 & 0.001 & 0.002 & 0.002 & 0.001 & 0.001 & 0 & 0.001 & 0.001 & 0.001 & 0.001 \\
0.007 & 0.007 & 0.005 & 0.001 & 0.002 & 0.001 & 0.001 & 0.001 & 0 & 0.001 & 0.001 & 0.001 & 0.001 \\
0.005 & 0.005 & 0.004 & 0.001 & 0.001 & 0.001 & 0.001 & 0.001 & 0 & 0.001 & 0.001 & 0.001 & 0 \\
0.005 & 0.005 & 0.004 & 0.001 & 0.001 & 0.001 & 0.001 & 0.001 & 0 & 0.001 & 0.001 & 0.001 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0.005 & 0.005 & 0.004 & 0.001 & 0.001 & 0.001 & 0.001 & 0.001 & 0 & 0.001 & 0.001 & 0.001 & 0 \\
0.004 & 0.004 & 0.003 & 0.001 & 0.001 & 0.001 & 0.001 & 0.001 & 0 & 0.001 & 0.001 & 0.001 & 0 \\
0.005 & 0.005 & 0.004 & 0.001 & 0.001 & 0.001 & 0.001 & 0.001 & 0 & 0.001 & 0.001 & 0.001 & 0 \\
0.003 & 0.003 & 0.002 & 0.001 & 0.001 & 0.001 & 0 & 0 & 0 & 0 & 0 & 0 & 0
\end{bmatrix}$$
and
$$\mathrm{MSEM}(Z_1\hat\alpha(\mathcal{Z}_1)) = \begin{bmatrix}
0.044 & 0.044 & 0.033 & 0.008 & 0.007 & 0.008 & 0.007 & 0.007 & 0 & -0.002 & -0.004 & -0.004 & 0.004 \\
0.044 & 0.044 & 0.033 & 0.008 & 0.007 & 0.008 & 0.007 & 0.007 & 0 & -0.002 & -0.004 & -0.004 & 0.004 \\
0.033 & 0.033 & 0.026 & 0.006 & 0.008 & 0.007 & 0.005 & 0.005 & 0 & 0.006 & 0.005 & 0.006 & 0.003 \\
0.008 & 0.008 & 0.006 & 0.002 & 0.001 & 0.001 & 0.001 & 0.001 & 0 & -0.002 & -0.003 & -0.003 & 0.001 \\
0.007 & 0.007 & 0.008 & 0.001 & 0.005 & 0.003 & 0.001 & 0.001 & 0 & 0.011 & 0.011 & 0.013 & 0.001 \\
0.008 & 0.008 & 0.007 & 0.001 & 0.003 & 0.002 & 0.001 & 0.001 & 0 & 0.004 & 0.004 & 0.005 & 0.001 \\
0.007 & 0.007 & 0.005 & 0.001 & 0.001 & 0.001 & 0.001 & 0.001 & 0 & -0.001 & -0.002 & -0.002 & 0.001 \\
0.007 & 0.007 & 0.005 & 0.001 & 0.001 & 0.001 & 0.001 & 0.001 & 0 & -0.001 & -0.002 & -0.002 & 0.001 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
-0.002 & -0.002 & 0.006 & -0.002 & 0.011 & 0.004 & -0.001 & -0.001 & 0 & 0.031 & 0.033 & 0.039 & 0 \\
-0.004 & -0.004 & 0.005 & -0.003 & 0.011 & 0.004 & -0.002 & -0.002 & 0 & 0.033 & 0.034 & 0.041 & -0.001 \\
-0.004 & -0.004 & 0.006 & -0.003 & 0.013 & 0.005 & -0.002 & -0.002 & 0 & 0.039 & 0.041 & 0.049 & -0.001 \\
0.004 & 0.004 & 0.003 & 0.001 & 0.001 & 0.001 & 0.001 & 0.001 & 0 & 0 & -0.001 & -0.001 & 0
\end{bmatrix}.$$
It is also noted that the difference $\mathrm{MSEM}(Z_1\hat\alpha(\mathcal{Z}_1)) - \mathrm{MSEM}(Z_1\hat\alpha(\mathcal{Z}))$ is psd because its distinct eigenvalues are 0, 0.027, and 0.012. Then, we observe that the inequality $\mathrm{MSEM}(Z_1\hat\alpha(\mathcal{Z})) \preceq \mathrm{MSEM}(Z_1\hat\alpha(\mathcal{Z}_1))$ holds.

5. Conclusions

In this study, I have presented a detailed analysis of comparing the BLUPs under SRLMs and their subsample models. The primary goal of the research was to explore the comparative performance of the BLUPs, employing the MSEM approach for evaluation, alongside matrix inertia and rank methodologies. This approach allowed for a systematic and efficient comparison of the BLUPs, providing valuable insights into their performance under various restrictions in the models.
It was demonstrated that the comparison of BLUPs using the MSEM criterion allowed for determining the optimality of predictors across different models. The block matrix inertia and rank methodology were effectively employed to simplify the complicated matrix operations involved, leading to a systematic and efficient way of comparing statistical properties in LMs. These methods facilitated a reduction of results to special cases, making the findings applicable to a wider range of model structures. Through matrix inertia and rank methodology, we provided rigorous comparisons, making significant contributions to the existing literature on restricted and unconstrained LMs. The approaches used here reduce the problem of comparing BLUPs to determine optimal predictors under the considered models to a problem of comparing numbers, since the inertia and the rank of a matrix are simple quantities to understand and calculate.
The results obtained in the study were further extended to demonstrate the superiority of the BLUPs under the SRLM and its subsample models. By utilizing the predictability condition of the unknown parameter vectors, I presented a set of inequalities and equalities that govern the comparison of BLUPs. Additionally, I explored the connection between the MSEM results and their implications for the BLUEs, providing a deeper understanding of the role of restrictions in enhancing model predictions. It was also shown that the results can be reduced to linear regression models with exact restrictions as well as those without any restrictions on unknown parameters. Theoretical results were carefully derived and supported by numerical examples that demonstrate the accuracy and applicability of these results. Real-world data sets were analyzed to illustrate the practical application of the findings. These empirical analyses not only confirmed the validity of the theoretical outcomes but also showcased the robustness and utility of the proposed methodology in real-world settings.
The findings presented in this paper are significant for both theoretical and empirical applications in statistical analysis. The use of stochastic restrictions within linear regression models has wide-reaching implications for many scientific disciplines. Moreover, the methodological advancements introduced, particularly in the areas of matrix inertia and rank, offer valuable tools for future research.
In conclusion, the work presented in this study contributes to ongoing efforts to improve the accuracy and efficiency of statistical models, particularly in the context of linear regression with restrictions. The results not only enhance our theoretical understanding but also provide practical tools for analyzing real-world data through the application of restricted linear models.
In future work, the methodology presented in this study can be extended to more complex model structures, such as those involving nonlinear or high-dimensional data. Furthermore, further exploration of stochastic restrictions in dynamic settings or under different distributional assumptions could provide new insights into the optimality of predictors. Investigating the integration of these methods with machine learning techniques may also open new avenues for improving prediction accuracy in modern data-driven applications.

Funding

This research was funded by Sakarya University.

Data Availability Statement

No new data were created or analyzed in this study. Data sharing is not applicable to this article.

Conflicts of Interest

The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

Abbreviations

Abbreviations and their full forms used throughout the article.
LM: Linear Model
RLM: Restricted Linear Model
SRLM: Stochastically Restricted Linear Model
BLUP: Best Linear Unbiased Predictor
BLUE: Best Linear Unbiased Estimator
MSEM: Mean Squared Error Matrix
psd: Positive Semi-Definite

Appendix A

The following lemmas are required in the sections above; see ref. [31]. We use them as tools to derive various inequalities and equalities for the comparison results.
Lemma A1. 
Let $\Phi_1$ and $\Phi_2$ be two $m \times m$ symmetric matrices. Then,
$$i_-(\Phi_1 - \Phi_2) = 0 \Leftrightarrow \Phi_1 \succeq \Phi_2$$
and
$$i_+(\Phi_1 - \Phi_2) = 0 \Leftrightarrow \Phi_1 \preceq \Phi_2.$$
Lemma A2. 
Let $\Phi_1$ and $\Phi_2$ be $m \times m$ and $n \times n$ symmetric matrices, respectively, let $\Omega$ be an $m \times n$ matrix, and let $k$ be a real scalar. Then,
$$r(\Phi_1) = i_+(\Phi_1) + i_-(\Phi_1),$$
$$i_\pm(k\Phi_1) = i_\pm(\Phi_1) \ \text{if} \ k > 0, \quad i_\pm(k\Phi_1) = i_\mp(\Phi_1) \ \text{if} \ k < 0,$$
$$i_\pm\begin{bmatrix} \Phi_1 & \Omega \\ \Omega' & \Phi_2 \end{bmatrix} = i_\pm\begin{bmatrix} \Phi_1 & -\Omega \\ -\Omega' & \Phi_2 \end{bmatrix} = i_\mp\begin{bmatrix} -\Phi_1 & -\Omega \\ -\Omega' & -\Phi_2 \end{bmatrix},$$
$$i_\pm\begin{bmatrix} \Phi_1 & 0 \\ 0 & \Phi_2 \end{bmatrix} = i_\pm(\Phi_1) + i_\pm(\Phi_2), \quad i_+\begin{bmatrix} 0 & \Omega \\ \Omega' & 0 \end{bmatrix} = i_-\begin{bmatrix} 0 & \Omega \\ \Omega' & 0 \end{bmatrix} = r(\Omega),$$
$$i_\pm\begin{bmatrix} \Phi_1 & \Omega \\ \Omega' & 0 \end{bmatrix} = r(\Omega) + i_\pm(\Omega^\perp\Phi_1\Omega^\perp),$$
$$i_+\begin{bmatrix} \Phi_1 & \Omega \\ \Omega' & 0 \end{bmatrix} = r[\Phi_1, \Omega], \quad i_-\begin{bmatrix} \Phi_1 & \Omega \\ \Omega' & 0 \end{bmatrix} = r(\Omega) \ \text{if} \ \Phi_1 \succeq 0,$$
$$i_\pm\begin{bmatrix} \Phi_1 & \Omega \\ \Omega' & \Phi_2 \end{bmatrix} = i_\pm(\Phi_1) + i_\pm(\Phi_2 - \Omega'\Phi_1^+\Omega) \ \text{if} \ \mathcal{C}(\Omega) \subseteq \mathcal{C}(\Phi_1).$$
Proof of Theorem 1. 
According to the definition of the MSEM,
$$\mathrm{MSEM}(\tilde\upsilon_i(\mathcal{Z})) = D(\upsilon_i - \tilde\upsilon_i(\mathcal{Z})) \quad \text{and} \quad \mathrm{MSEM}(\tilde\upsilon_i(\mathcal{Z}_i)) = D(\upsilon_i - \tilde\upsilon_i(\mathcal{Z}_i)).$$
Let C = MSEM ( υ ˜ i ( Z ) ) and consider (8) and (A8). Then,
$$i_\pm\big(\mathrm{MSEM}(\tilde\upsilon_i(\mathcal{Z})) - \mathrm{MSEM}(\tilde\upsilon_i(\mathcal{Z}_i))\big) = i_\pm\big(C - D(\upsilon_i - \tilde\upsilon_i(\mathcal{Z}_i))\big)$$
$$= i_\pm\big(C - \sigma^2(M_iW_i^+ - L_i)\Upsilon_i(M_iW_i^+ - L_i)'\big) \quad (A9)$$
$$= i_\pm\begin{bmatrix} \Upsilon_i & \Upsilon_i(W_i^+)'M_i' - \Upsilon_iL_i' \\ M_iW_i^+\Upsilon_i - L_i\Upsilon_i & \sigma^{-2}C \end{bmatrix} - i_\pm(\Upsilon_i) \quad (A10)$$
$$= i_\pm\left(\begin{bmatrix} \Upsilon_i & -\Upsilon_iL_i' \\ -L_i\Upsilon_i & \sigma^{-2}C \end{bmatrix} + \begin{bmatrix} \Upsilon_i & 0 \\ 0 & M_i \end{bmatrix}\begin{bmatrix} 0 & W_i \\ W_i' & 0 \end{bmatrix}^+\begin{bmatrix} \Upsilon_i & 0 \\ 0 & M_i \end{bmatrix}'\right) - i_\pm(\Upsilon_i)$$
$$= i_\pm\begin{bmatrix} 0 & W_i & \Upsilon_i & 0 \\ W_i' & 0 & 0 & M_i' \\ \Upsilon_i & 0 & \Upsilon_i & -\Upsilon_iL_i' \\ 0 & M_i & -L_i\Upsilon_i & \sigma^{-2}C \end{bmatrix} - i_\pm\begin{bmatrix} 0 & W_i \\ W_i' & 0 \end{bmatrix} - i_\pm(\Upsilon_i)$$
$$= i_\pm\begin{bmatrix} \Upsilon_i & W_i & -\Upsilon_iL_i' \\ W_i' & 0 & M_i' \\ -L_i\Upsilon_i & M_i & \sigma^{-2}C - L_i\Upsilon_iL_i' \end{bmatrix} - r(W_i), \quad (A11)$$
where Wi = [Zi, Υi Zi⊥] and Mi = [K, Li Υi Zi⊥], with Zi⊥ = I − Zi Zi⁺. Here, (A7) is applied to both expressions in (A9) and (A10) by using the fact that Υi = Υi Υi⁺ Υi and the column space inclusions
C(K′) ⊆ C(Zi′) and C(Υi Li′) ⊆ C(Υi), with C[Zi, Υi Zi⊥] = C[Zi, Υi],
respectively; further, (A2) and (A4) are used with elementary block matrix operations to obtain the expression in (A11). By substituting Wi = [Zi, Υi Zi⊥] and Mi = [K, Li Υi Zi⊥] into (A11) and using (A3)–(A5) with elementary block matrix operations, as well as the fact that
r[Zi, Υi Zi⊥] = r[Zi, Υi],
the expression in (A11) is equivalently written as follows:
i ± Υ i Z i Υ i Z i Υ i L i Z i 0 0 K Z i Υ i 0 0 Z i Υ i L i L i Υ i K L i Υ i Z i σ 2 C L i Υ i L i r Z i , Υ i = i ± Υ i Z i Υ i L i Z i 0 K L i Υ i K σ 2 C L i Υ i L i + i ± Z i Υ i Z i r Z i , Υ i = i Υ i Υ i L i Z i L i Υ i L i Υ i L i σ 2 C K Z i K 0 + i ± Υ i Z i Z i 0 r ( Z i ) r Z i , Υ i = i Υ i Υ i L i Z i L i Υ i L i Υ i L i σ 2 MSEM ( υ ˜ i ( Z ) ) K Z i K 0 + i ± Υ i Z i Z i 0 r ( Z i ) r Z i , Υ i = i Υ i Υ i L i Z i L i Υ i L i Υ i L i K Z i K 0 0 I t 0 MSEM ( υ ˜ i ( Z ) ) 0 I t 0 + i ± Υ i Z i Z i 0 r ( Z i ) r Z i , Υ i .
We can apply (A7) to (A12) after substituting MSEM(υ̃i(Z)) from (7). Then, in a manner similar to obtaining (A10), (A12) is equivalently written as
i Υ 0 Υ T i L i 0 0 Υ i Υ i L i Z i L i T i Υ L i Υ i L i Υ i L i K 0 Z i K 0 + Υ 0 0 0 0 K , L i T i Υ Z 0 0 0 W W 0 + × Υ 0 0 0 0 0 K , L i T i Υ Z 0 + i ± Υ i Z i Z i 0 r ( Z i ) r Z i , Υ i i ( Υ ) ,
where W = [Z, ΥZ⊥], with Z⊥ = I − ZZ⁺. We can apply (A7) to (A13) since
C(Υ) ⊆ C(W) and C[K, Li Ti Υ Z] ⊆ C(W).
From Lemma A2 and some congruence operations, (A13) is equivalently written as
i 0 Z Υ Z Υ 0 0 0 Z 0 0 0 0 K 0 Z Υ 0 0 0 0 Z Υ T i L i 0 Υ 0 0 Υ 0 Υ T i L i 0 0 0 0 0 Υ i Υ i L i Z i 0 K L i T i Υ Z L i T i Υ L i Υ i L i Υ i L i K 0 0 0 0 Z i K 0 + i ± Υ i Z i Z i 0 r ( Z i ) r Z i , Υ i i ( Υ ) r Z , Υ Z = i Υ Z Υ Z 0 Υ T i L i 0 Z 0 0 0 K 0 Z Υ 0 0 0 Z Υ T i L i 0 0 0 0 Υ i Υ i L i Z i L i T i Υ K L i T i Υ Z L i Υ i L i Υ i L i L i T i Υ T i L i K 0 0 0 Z i K 0 + i ± Υ i Z i Z i 0 r ( Z i ) r Z i , Υ i r Z , Υ = i Υ Z 0 Υ T i L i 0 Z 0 0 K 0 0 0 Υ i Υ i L i Z i L i T i Υ K L i Υ i 0 K 0 0 Z i K 0 + i ± Υ i Z i Z i 0 r ( Z i ) r Z i , Υ i r Z , Υ + i ( Z Υ Z ) = i ± Υ 0 Z 0 Υ T i L i 0 Υ i 0 Z i Υ i L i Z 0 0 0 K 0 Z i 0 0 K L i T i Υ L i Υ i K K 0 + i ± Υ i Z i Z i 0 r ( Z i ) r Z i , Υ i r Z , Υ + i Υ Z Z 0 r ( Z ) .
By using (A6) and writing G instead of the first matrix in (A14),
i+(MSEM(υ̃i(Z)) − MSEM(υ̃i(Zi))) = i+(G) − r(Zi) − r[Z, Υ] (A15)
and
i−(MSEM(υ̃i(Z)) − MSEM(υ̃i(Zi))) = i−(G) − r[Zi, Υi] − r(Z) (A16)
are obtained.
According to (A1), adding the equalities in (A15) and (A16) yields
r(MSEM(υ̃i(Z)) − MSEM(υ̃i(Zi))) = r(G) − r(Zi) − r(Z) − r[Z, Υ] − r[Zi, Υi]. (A17)
Applying Lemma A1 to (A15)–(A17) yields (a)–(c). □
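In the spirit of Theorem 1, deciding which predictor is superior under the MSEM criterion amounts to inspecting the inertia of the difference of the two MSEMs. The following is a minimal sketch with hypothetical MSEM values (not taken from the paper's numerical example):

```python
import numpy as np

def inertia(A, tol=1e-10):
    """Numbers of positive and negative eigenvalues of symmetric A."""
    w = np.linalg.eigvalsh(A)
    return int(np.sum(w > tol)), int(np.sum(w < -tol))

# Hypothetical MSEMs of two competing predictors, for illustration only
M1 = np.array([[2.0, 0.5], [0.5, 1.0]])   # MSEM of predictor 1
M2 = np.array([[3.0, 0.5], [0.5, 2.0]])   # MSEM of predictor 2

ip, im = inertia(M2 - M1)
if im == 0:
    print("M2 - M1 is psd: predictor 1 is superior under the MSEM criterion")
elif ip == 0:
    print("M1 - M2 is psd: predictor 2 is superior under the MSEM criterion")
else:
    print("The MSEMs are not comparable under the Loewner ordering")
```

With the illustrative values above, M2 − M1 is the identity matrix, so the first branch is taken and predictor 1 is judged superior.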

References

1. Haupt, H.; Oberhofer, W. Stochastic response restrictions. J. Multivar. Anal. 2005, 95, 66–75.
2. Baksalary, J.K. Comparing stochastically restricted estimators in a linear regression model. Biom. J. 1984, 26, 555–557.
3. Güler, N.; Büyükkaya, M.E. Statistical inference of a stochastically restricted linear mixed model. AIMS Math. 2023, 8, 24401–24417.
4. Haslett, S.J.; Puntanen, S. Equality of BLUEs or BLUPs under two linear models using stochastic restrictions. Stat. Pap. 2010, 51, 465–475.
5. Ren, X. Corrigendum to "On the equivalence of the BLUEs under a general linear model and its restricted and stochastically restricted models" [Stat. Probab. Lett. 90 (2014) 1–10]. Stat. Probab. Lett. 2015, 104, 181–185.
6. Ren, X. The equalities of estimations under a general partitioned linear model and its stochastically restricted model. Commun. Stat.-Theory Methods 2016, 45, 6495–6509.
7. Theil, H. On the use of incomplete prior information in regression analysis. J. Am. Stat. Assoc. 1963, 58, 401–414.
8. Theil, H.; Goldberger, A.S. On pure and mixed statistical estimation in economics. Int. Econ. Rev. 1961, 2, 65–78.
9. Xu, J.; Yang, H. Estimation in singular linear models with stochastic linear restrictions. Commun. Stat.-Theory Methods 2007, 36, 1945–1951.
10. Büyükkaya, M.E. Comparison of predictors under constrained general linear model and its future observations. Commun. Stat.-Theory Methods 2024, 53, 8929–8941.
11. Güler, N.; Büyükkaya, M.E. Further remarks on constrained over-parameterized linear models. Stat. Pap. 2024, 65, 975–988.
12. Jiang, B.; Tian, Y. On best linear unbiased estimation and prediction under a constrained linear random-effects model. J. Ind. Manag. Optim. 2023, 19, 852–867.
13. Li, W.; Tian, Y.; Yuan, R. Statistical analysis of a linear regression model with restrictions and superfluous variables. J. Ind. Manag. Optim. 2023, 19, 3107–3127.
14. Tian, Y.; Wang, J. Some remarks on fundamental formulas and facts in the statistical analysis of a constrained general linear model. Commun. Stat.-Theory Methods 2020, 45, 1201–1216.
15. Büyükkaya, M.E. Characterizing relationships between BLUPs under linear mixed model and some associated reduced models. Commun. Stat.-Simul. Comput. 2023, 52, 3438–3451.
16. Güler, N.; Büyükkaya, M.E. Notes on comparison of covariance matrices of BLUPs under linear random-effects model with its two subsample models. Iran. J. Sci. Technol. Trans. A 2019, 43, 2993–3002.
17. Güler, N.; Büyükkaya, M.E. Some remarks on comparison of predictors in seemingly unrelated linear mixed models. Appl. Math. 2022, 67, 525–542.
18. Güler, N.; Büyükkaya, M.E. Comparison of BLUPs under multiple partitioned linear models and its correctly-reduced models. Miskolc Math. Notes 2024, 25, 241–254.
19. Tian, Y. Some equalities and inequalities for covariance matrices of estimators under linear model. Stat. Pap. 2017, 58, 467–484.
20. Tian, Y. Matrix rank and inertia formulas in the analysis of general linear models. Open Math. 2017, 15, 126–150.
21. Tian, Y.; Guo, W. On comparison of dispersion matrices of estimators under a constrained linear model. Stat. Methods Appl. 2016, 25, 623–649.
22. Puntanen, S.; Styan, G.P.H.; Isotalo, J. Matrix Tricks for Linear Statistical Models: Our Personal Top Twenty, 1st ed.; Springer: Berlin/Heidelberg, Germany, 2011.
23. Tian, Y. Transformation approaches of linear random-effects models. Stat. Methods Appl. 2017, 26, 583–608.
24. Rao, C.R. Representations of best linear unbiased estimators in the Gauss-Markoff model with a singular dispersion matrix. J. Multivar. Anal. 1973, 3, 276–292.
25. Goldberger, A.S. Best linear unbiased prediction in the generalized linear regression model. J. Am. Stat. Assoc. 1962, 57, 369–375.
26. Alalouf, I.S.; Styan, G.P.H. Characterizations of estimability in the general linear model. Ann. Stat. 1979, 7, 194–200.
27. Güler, N.; Büyükkaya, M.E. Inertia and rank approach in transformed linear mixed models for comparison of BLUPs. Commun. Stat.-Theory Methods 2023, 52, 3108–3123.
28. Lu, C.; Gan, S.; Tian, Y. Some remarks on general linear model with new regressors. Stat. Probab. Lett. 2015, 97, 16–24.
29. Yang, H.; Ye, H.; Xue, K. A further study of predictions in linear mixed models. Commun. Stat.-Theory Methods 2014, 43, 4241–4252.
30. Brownlee, K.A. Statistical Theory and Methodology in Science and Engineering, 2nd ed.; John Wiley & Sons, Inc.: Hoboken, NJ, USA, 1965.
31. Tian, Y. Equalities and inequalities for inertias of Hermitian matrices with applications. Linear Algebra Appl. 2010, 433, 263–296.
Table 1. Value and number of eigenvalues of matrix G.

Negative Eigenvalues of G    Frequency    Positive Eigenvalues of G    Frequency
−7.5                         1            1.22                         1
−5.74                        1            2.28                         1
−3.7                         1            3.52                         1
−3.52                        1            3.71                         1
−3.5                         1            5.74                         1
−2.1                         1            7.49                         1
−2.28                        1            1.20                         20
−1.21                        1
−1                           1
−1.20                        10
Total                        19           Total                        26
Table 2. The value and number of eigenvalues of the matrix G.

Negative Eigenvalues of G    Frequency    Positive Eigenvalues of G    Frequency
−9                           1            1.22                         1
−7.5                         1            2.28                         1
−6.1                         1            3.5                          1
−5.74                        1            3.52                         1
−3.7                         1            3.71                         1
−3.52                        1            5.74                         1
−2.9                         1            6                            1
−2.28                        1            7.49                         1
−1.21                        1            1.20                         18
−1.20                        9
Total                        18           Total                        26
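Tallies such as those in Tables 1 and 2 can be reproduced by counting the distinct negative and positive eigenvalues of the matrix G. The following sketch uses a small hypothetical symmetric G, since the matrices of the numerical example are not restated here:

```python
import numpy as np
from collections import Counter

def eigenvalue_table(G, tol=1e-10, decimals=2):
    """Tally the distinct negative and positive eigenvalues of a
    symmetric matrix G, rounded to `decimals` places, as in Tables 1-2."""
    w = np.round(np.linalg.eigvalsh(G), decimals)
    neg = Counter(float(v) for v in w if v < -tol)
    pos = Counter(float(v) for v in w if v > tol)
    return neg, pos

# Hypothetical symmetric G, for illustration only
G = np.diag([-2.0, -2.0, 3.0, 1.0])
neg, pos = eigenvalue_table(G)
print(dict(neg))  # {-2.0: 2}
print(dict(pos))  # {1.0: 1, 3.0: 1}
```

The frequencies summed over both tallies recover r(G), and the sizes of the two tallies give i−(G) and i+(G), which is exactly the information Theorem 1 extracts from G.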