Article

Inference and Local Influence Assessment in a Multifactor Skew-Normal Linear Mixed Model

by Zeinolabedin Najafi 1, Karim Zare 1,*,†, Mohammad Reza Mahmoudi 2,†, Soheil Shokri 3 and Amir Mosavi 4,5,6,7,*
1 Department of Statistics, Marvdasht Branch, Islamic Azad University, Marvdasht 73711-13119, Iran
2 Department of Statistics, Faculty of Science, Fasa University, Fasa 74616-86131, Iran
3 Department of Statistics, Lahijan Branch, Islamic Azad University, Lahijan 44169-39515, Iran
4 Faculty of Civil Engineering, Technische Universität Dresden, 01069 Dresden, Germany
5 John von Neumann Faculty of Informatics, Obuda University, 1034 Budapest, Hungary
6 Institute of Information Society, University of Public Service, 1083 Budapest, Hungary
7 Institute of Information Engineering, Automation and Mathematics, Slovak University of Technology in Bratislava, 81243 Bratislava, Slovakia
* Authors to whom correspondence should be addressed.
† These authors contributed equally to this work.
Mathematics 2022, 10(15), 2820; https://doi.org/10.3390/math10152820
Submission received: 6 July 2022 / Revised: 26 July 2022 / Accepted: 4 August 2022 / Published: 8 August 2022
(This article belongs to the Special Issue Mathematical and Computational Statistics and Their Applications)

Abstract

This work considers a multifactor linear mixed model under heteroscedasticity in the random-effect factors and skew-normal errors for modeling correlated datasets. We implement an expectation–maximization (EM) algorithm to obtain the maximum likelihood estimates using conditional distributions derived from the skew-normal distribution. The EM algorithm is also used to extend the local influence approach to this model under three perturbation schemes. Furthermore, a Monte Carlo simulation is conducted to evaluate the efficiency of the estimators. Finally, a real dataset is used to make an illustrative comparison among the following four scenarios: normal/skew-normal errors and heteroscedasticity/homoscedasticity in the random-effect factors. The empirical studies show that our methodology improves the estimates when the model errors follow a skew-normal distribution. In addition, the local influence analysis indicates that our model reduces the effect of anomalous observations compared with the normal model.

1. Introduction

Linear mixed models (LMMs) are useful for the statistical analysis of correlated datasets such as longitudinal data. For simplicity, it is usually assumed that both random effects and random errors follow a normal distribution. Under these assumptions, there are several proposals for estimating LMM parameters in the literature; among these, one can refer to Harville [1], Fellner [2], Khuri et al. [3] and Wu et al. [4]. For example, Harville [1] and Fellner [2] obtained the maximum likelihood (ML) estimates of the parameters in a multifactor normal LMM under heteroscedasticity of the random-effect factors. They showed that the estimates, in addition to being consistent, were asymptotically normally distributed. However, as pointed out by Zhang and Davidian [5], these estimation methods may lead to invalid statistical inferences when the data are asymmetric. Therefore, many authors have criticized the routine use of the normality assumption (see, e.g., [6,7,8,9,10]).
From a practical perspective, the most frequently used way to achieve normality is to transform the variables. Although such transformations may provide suitable empirical results, they should not be used if a more reasonable theoretical model is available [11]. Thus, it is of great interest to develop estimation methods for statistical models with flexible distributional assumptions. In this sense, one of the simplest and most applicable distributions that accommodates skewness and contains the normal distribution is the skew-normal (SN) distribution introduced by Azzalini [12]. Although it is an old proposal, it is still used in statistical models because of its flexibility and simplicity, especially in complex models where newer generalized skewed distributions are difficult to handle. Some recent works in this research area are [13,14]. In the literature, many authors have studied inference on the parameters of LMMs (with only one random-effect factor) in which the random effects or the model errors follow asymmetric, non-normal distributions (see, e.g., [5,7,8,9,15,16,17,18,19,20,21,22]). Verbeke and Lesaffre [15], through an extensive simulation with a misspecified random-effect distribution, showed that the standard errors of the parameters needed to be corrected. Arellano-Valle et al. [18] considered an LMM in which both the random-effect factor and the random errors follow the SN distribution. Owing to the complexity of the model, they derived marginal distributions and implemented an EM algorithm to obtain the ML estimates. They indicated that the estimates were more efficient than the normal estimates when the normality assumption was violated. Lachos et al. [19] presented an LMM in which the random effects followed a multivariate SN independent distribution. They derived the ML estimates of the parameters based on an efficient EM algorithm and also investigated a technique to predict the response variable. Kheradmandi and Rasekh [20] followed [18] and obtained the ML estimates in the LMM when the fixed effects were measured with non-negligible errors. In this work, a multifactor SN-LMM is considered with different variances for the random-effect factors to show how heteroscedasticity, as a less restrictive assumption on the random-effect factors, can improve the statistical results. Here, an SN-LMM is an LMM whose random errors follow an SN distribution.
Diagnostic analysis is a necessary step after parameter estimation in any statistical analysis. The local influence approach, the pioneering work of Cook [23], is one of the most important diagnostic tools for assessing the stability of parameter estimates. Because of the complicated calculations required by Cook's local influence approach in statistical models with incomplete data, Zhu and Lee [24] extended Cook's approach to such models based on the conditional expectation of the complete-data log-likelihood in the E-step of the EM algorithm. For some applications of Zhu and Lee's approach, one can refer to [25,26,27,28]. Local influence analysis for LMMs based on the SN distribution has been studied by Bolfarine et al. [29], Montenegro et al. [30], and Zeller et al. [31]. All of these works considered local influence diagnostics for an LMM with an SN distribution in the random-effect factor, and they all considered the same perturbation schemes. In this work, besides parameter estimation, we develop Zhu and Lee's local influence diagnostic measures for the LMM under the different assumptions on the random effects and random errors mentioned above. A perturbation scheme different from those in the previous works is also considered. The rest of the paper is structured as follows. In Section 2, we present the model and obtain distributional facts about the variables that are needed for the EM algorithm. In Section 3, parameter estimation and random-effect prediction are derived via the EM algorithm. In Section 4, the local influence diagnostic measures for the LMM are extended based on the methodology proposed by Zhu and Lee [24], and the basic building blocks of three perturbation schemes are derived. In Section 5, a simulation study is performed to compare the normal LMM and the SN-LMM, and a real dataset is then analyzed for an illustrative comparison. Discussion and conclusions are given in Section 6.

2. The Model Definition

Consider the following LMM:
$$Y = X\beta + Ub + \varepsilon = X\beta + \sum_{i=1}^{m} U_i b_i + \varepsilon, \tag{1}$$
where $\beta$ is a $p \times 1$ vector of fixed-effect parameters; $X$ and $U = [U_1 \mid U_2 \mid \cdots \mid U_m]$ are $n \times p$ and $n \times q$ known design matrices, respectively, with $U_i$ the $n \times q_i$ design matrix of the $i$th random-effect factor; $b = (b_1^\top, b_2^\top, \ldots, b_m^\top)^\top$, where $b_i$ is a $q_i \times 1$ vector of unobservable random effects from $N_{q_i}(0, \sigma_i^2 I_{q_i})$, $i = 1, \ldots, m$; and $\varepsilon$ is an $n \times 1$ vector of unobservable random errors from $SN_n(0, \sigma^2 I_n, \lambda_\varepsilon)$, the $n$-dimensional SN distribution with skewness vector $\lambda_\varepsilon \in \mathbb{R}^n$. The variances $\sigma^2$ and $\sigma_i^2$, $i = 1, \ldots, m$, are called the variance components. We assume that $b_i$, $i = 1, \ldots, m$, and $\varepsilon$ are mutually independent. One may also write $b \sim N_q(0, \sigma^2 \Sigma)$, where $\Sigma$ is a block-diagonal matrix whose $i$th block is $\gamma_i I_{q_i}$, with $\gamma_i = \sigma_i^2/\sigma^2$ called the ratios of the variance components.
These assumptions imply that $Y \mid b \sim SN_n(X\beta + Ub, \sigma^2 I_n, \lambda_\varepsilon)$, and so the joint density of $Y$ and $b$ is
$$f(y, b) = 2\,\phi_q(b \mid 0, \sigma^2\Sigma)\,\phi_n(y \mid X\beta + Ub, \sigma^2 I_n)\,\Phi\!\big(\lambda_\varepsilon^\top(y - X\beta - Ub)/\sigma\big), \tag{2}$$
where $\phi_n(\cdot \mid \mu, \Sigma)$ stands for the $n$-variate normal density with mean $\mu$ and covariance matrix $\Sigma$, and $\Phi$ denotes the cumulative distribution function of $N(0, 1)$.
From (2), the marginal density of $Y$ is
$$f(y) = \int_{\mathbb{R}^q} 2\,\phi_q\big(b \mid \Sigma U^\top V^{-1}(y - X\beta), \sigma^2\Sigma T\big)\,\phi_n(y \mid X\beta, \sigma^2 V)\,\Phi\!\big(\lambda_\varepsilon^\top(y - X\beta - Ub)/\sigma\big)\,db$$
$$= 2\,\phi_n(y \mid X\beta, \sigma^2 V)\,E\big[\Phi\!\big(\lambda_\varepsilon^\top(y - X\beta - UB)/\sigma\big)\big]$$
$$= 2\,\phi_n(y \mid X\beta, \sigma^2 V)\,\Phi\!\left(\frac{\lambda_\varepsilon^\top V^{-1}(y - X\beta)}{\sigma\delta}\right), \qquad y \in \mathbb{R}^n,$$
where $V = I_n + U\Sigma U^\top = I_n + \sum_{i=1}^{m}\gamma_i U_i U_i^\top$; $T = (I_q + U^\top U\Sigma)^{-1}$, so that $\Sigma T$ is a symmetric, non-singular matrix; $B \sim N_q\big(\Sigma U^\top V^{-1}(y - X\beta), \sigma^2\Sigma T\big)$; and $\delta^2 = 1 + \lambda_\varepsilon^\top(I_n - V^{-1})\lambda_\varepsilon$. Therefore, $Y$ has a generalized SN distribution.
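To make the model concrete, the following R sketch (not the authors' code; all sizes, seeds, and parameter values are illustrative) draws one response vector from model (1) with two hypothetical random-effect factors, using the sn package mentioned in Section 5. It assumes that the sn parameterization SN(xi, Omega, alpha) with Omega = σ²Iₙ corresponds to the paper's SN_n(0, σ²Iₙ, λ_ε).

```r
# Minimal illustrative sketch (not the authors' code): one draw of Y from model (1)
# with m = 2 hypothetical random-effect factors; dimensions and values are made up.
library(sn)  # rmsn(): multivariate skew-normal sampler

set.seed(1)
n <- 30; q1 <- 5; q2 <- 3
X  <- cbind(1, rnorm(n))                          # n x p design matrix of fixed effects
U1 <- diag(q1)[sample(q1, n, replace = TRUE), ]   # n x q1 incidence matrix of factor 1
U2 <- diag(q2)[sample(q2, n, replace = TRUE), ]   # n x q2 incidence matrix of factor 2
beta <- c(1, 2); sigma2 <- 0.5^2
gamma <- c(1.5, 0.8)                              # ratios gamma_i = sigma_i^2 / sigma^2
lambda <- rep(0.8, n)                             # skewness vector lambda_eps

b1  <- rnorm(q1, 0, sqrt(sigma2 * gamma[1]))      # b_1 ~ N(0, sigma_1^2 I_{q1})
b2  <- rnorm(q2, 0, sqrt(sigma2 * gamma[2]))      # b_2 ~ N(0, sigma_2^2 I_{q2})
eps <- as.vector(rmsn(1, xi = rep(0, n), Omega = sigma2 * diag(n), alpha = lambda))
y   <- as.vector(X %*% beta + U1 %*% b1 + U2 %*% b2 + eps)
```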
Based on the distribution of $Y$, the log-likelihood function of $\theta = (\beta^\top, \sigma^2, \gamma^\top, \lambda_\varepsilon^\top)^\top$ is given by
$$l(\theta; X, y) \propto -\frac{n}{2}\log\sigma^2 - \frac{1}{2}\log|V| - \frac{1}{2\sigma^2}(y - X\beta)^\top V^{-1}(y - X\beta) + \log\Phi\!\left(\frac{\lambda_\varepsilon^\top V^{-1}(y - X\beta)}{\sigma\delta}\right), \tag{3}$$
where $\gamma = (\gamma_1, \ldots, \gamma_m)^\top$.
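As a complement, a hedged R sketch of the log-likelihood (3) is given below; loglik_sn_lmm() is a hypothetical helper (not from the paper) and assumes that y, X and the list of design matrices are built as in the previous sketch.

```r
# Hedged sketch of the observed-data log-likelihood (3); gamma_vec holds gamma_1, ..., gamma_m.
loglik_sn_lmm <- function(beta, sigma2, gamma_vec, lambda, y, X, U_list) {
  n <- length(y)
  U <- do.call(cbind, U_list)
  Sigma <- diag(rep(gamma_vec, times = sapply(U_list, ncol)))   # block diag of gamma_i I_{q_i}
  V <- diag(n) + U %*% Sigma %*% t(U)
  Vinv <- solve(V)
  r <- y - X %*% beta
  delta <- sqrt(1 + as.numeric(t(lambda) %*% (diag(n) - Vinv) %*% lambda))
  -(n / 2) * log(sigma2) -
    0.5 * as.numeric(determinant(V, logarithm = TRUE)$modulus) -
    as.numeric(t(r) %*% Vinv %*% r) / (2 * sigma2) +
    pnorm(as.numeric(t(lambda) %*% Vinv %*% r) / (sqrt(sigma2) * delta), log.p = TRUE)
}
# Example call with the objects from the previous sketch:
# loglik_sn_lmm(beta, sigma2, gamma, lambda, y, X, list(U1, U2))
```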
As can be seen, there is no closed-form solution for the direct maximization of Equation (3), so the likelihood function has to be maximized numerically. Direct numerical maximization, besides its high computational cost and lack of robustness to starting values, is problematic because of the term $\log\Phi(\cdot)$. Therefore, in line with previous studies that use the skew family for modeling (see, e.g., [8,9,18,19,20,22,27]), an EM algorithm is applied to reduce the computational complexity while retaining high efficiency.
The EM algorithm, introduced by Dempster et al. [32], is a famous iterative algorithm for ML estimation in incomplete data models. One of the major reasons for its popularity is the M-step that includes maximization of a likelihood function based on complete data, which is often computationally simple. It is also not very sensitive to the starting parameter values.
Let $TN(\xi, \eta^2; (a, b))$ denote the truncated normal distribution with parameters $\xi$ and $\eta^2$ and truncation range $(a, b)$. If the density of $Y$ is written as
$$f(y) = 2\,\phi_n(y \mid X\beta, \sigma^2 V)\int_0^{+\infty}\phi\!\big(z \mid \lambda_\varepsilon^\top V^{-1}(y - X\beta), \sigma^2\delta^2\big)\,dz, \qquad y \in \mathbb{R}^n,$$
the joint distribution of $Y$ and the missing variable $Z$ is
$$f(y, z) = 2\,\phi_n(y \mid X\beta, \sigma^2 V)\,\phi\!\big(z \mid \lambda_\varepsilon^\top V^{-1}(y - X\beta), \sigma^2\delta^2\big), \qquad y \in \mathbb{R}^n,\ z > 0.$$
Based on this joint distribution, the conditional distribution of $Z \mid y$ is
$$f(z \mid y) = \frac{\phi\!\big(z \mid \lambda_\varepsilon^\top V^{-1}(y - X\beta), \sigma^2\delta^2\big)}{\int_0^{+\infty}\phi\!\big(z \mid \lambda_\varepsilon^\top V^{-1}(y - X\beta), \sigma^2\delta^2\big)\,dz}, \qquad z > 0.$$
Hence, $Z \mid y \sim TN\big(\lambda_\varepsilon^\top V^{-1}(y - X\beta), \sigma^2\delta^2; (0, +\infty)\big)$. Now, using the properties of the truncated normal distribution [33], we have
$$E[Z \mid y] = \lambda_\varepsilon^\top V^{-1}(y - X\beta) + \sigma\delta\, W\!\left(\frac{\lambda_\varepsilon^\top V^{-1}(y - X\beta)}{\sigma\delta}\right), \tag{4}$$
and
$$E[Z^2 \mid y] = \big(\lambda_\varepsilon^\top V^{-1}(y - X\beta)\big)^2 + \sigma^2\delta^2 + \sigma\delta\,\lambda_\varepsilon^\top V^{-1}(y - X\beta)\, W\!\left(\frac{\lambda_\varepsilon^\top V^{-1}(y - X\beta)}{\sigma\delta}\right), \tag{5}$$
where $W(x) = \phi(x)/\Phi(x)$ denotes the ratio of the standard normal density to its distribution function.
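In practice these two conditional moments can be computed in a few lines; the sketch below (hypothetical helper name, not the authors' code) evaluates (4) and (5) with a numerically stable version of W.

```r
# Hedged sketch of the E-step moments (4) and (5); Vinv, lambda, etc. as in the earlier sketches.
estep_z <- function(beta, sigma2, Vinv, lambda, y, X) {
  n  <- nrow(Vinv)
  r  <- y - X %*% beta
  delta <- sqrt(1 + as.numeric(t(lambda) %*% (diag(n) - Vinv) %*% lambda))
  mu <- as.numeric(t(lambda) %*% Vinv %*% r)                # lambda' V^{-1} (y - X beta)
  s  <- sqrt(sigma2) * delta
  W  <- exp(dnorm(mu / s, log = TRUE) - pnorm(mu / s, log.p = TRUE))  # phi/Phi on the log scale
  list(z1 = mu + s * W,                                     # E[Z | y], Equation (4)
       z2 = mu^2 + s^2 + s * mu * W)                        # E[Z^2 | y], Equation (5)
}
```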
To predict the random effects, we first need the conditional distribution of $b \mid y$, given by
$$f(b \mid y) = \frac{f(y, b)}{f(y)} = \frac{1}{\alpha}\,\phi_q\big(b \mid \Sigma U^\top V^{-1}(y - X\beta), \sigma^2\Sigma T\big)\,\Phi\!\big(\lambda_\varepsilon^\top(y - X\beta - Ub)/\sigma\big),$$
where
$$\alpha = \Phi\!\left(\frac{\lambda_\varepsilon^\top V^{-1}(y - X\beta)}{\sigma\delta}\right).$$
Now, the conditional log-likelihood function of $b$, given $\theta$, $X$ and $y$, is
$$l^*(b; \theta, X, y) \propto -\frac{q}{2}\log\sigma^2 - \frac{1}{2}\log|\Sigma T| - \frac{1}{2\sigma^2}\big(b - \Sigma U^\top V^{-1}(y - X\beta)\big)^\top(\Sigma T)^{-1}\big(b - \Sigma U^\top V^{-1}(y - X\beta)\big) + \log\Phi\!\big(\lambda_\varepsilon^\top(y - X\beta - Ub)/\sigma\big) - \log\alpha.$$
As with $l$, the function $l^*$ contains the term $\log\Phi(\cdot)$; hence, to predict $b$ by the ML method, we again use the EM algorithm.
To do this, we first rewrite the conditional distribution of $b \mid y$ as
$$f(b \mid y) = \frac{1}{\alpha}\,\phi_q\big(b \mid \Sigma U^\top V^{-1}(y - X\beta), \sigma^2\Sigma T\big)\int_0^{+\infty}\phi\!\big(z^* \mid \lambda_\varepsilon^\top(y - X\beta - Ub), \sigma^2\big)\,dz^*, \qquad b \in \mathbb{R}^q,\ y \in \mathbb{R}^n.$$
Thus, the conditional distribution of $b$ and the missing variable $Z^*$ given $y$ is
$$f(b, z^* \mid y) = \frac{1}{\alpha}\,\phi_q\big(b \mid \Sigma U^\top V^{-1}(y - X\beta), \sigma^2\Sigma T\big)\,\phi\!\big(z^* \mid \lambda_\varepsilon^\top(y - X\beta - Ub), \sigma^2\big), \qquad b \in \mathbb{R}^q,\ y \in \mathbb{R}^n,\ z^* > 0. \tag{6}$$
It follows that the conditional distribution of the missing variable $Z^*$ given $b$ and $y$ is
$$f(z^* \mid b, y) = \frac{\phi\!\big(z^* \mid \lambda_\varepsilon^\top(y - X\beta - Ub), \sigma^2\big)}{\int_0^{+\infty}\phi\!\big(z^* \mid \lambda_\varepsilon^\top(y - X\beta - Ub), \sigma^2\big)\,dz^*}, \qquad z^* > 0.$$
Hence, $Z^* \mid b, y \sim TN\big(\lambda_\varepsilon^\top(y - X\beta - Ub), \sigma^2; (0, +\infty)\big)$ and, therefore,
$$E[Z^* \mid b, y] = \lambda_\varepsilon^\top(y - X\beta - Ub) + \sigma\, W\!\big(\lambda_\varepsilon^\top(y - X\beta - Ub)/\sigma\big), \tag{7}$$
and
$$E[Z^{*2} \mid b, y] = \big(\lambda_\varepsilon^\top(y - X\beta - Ub)\big)^2 + \sigma^2 + \sigma\,\lambda_\varepsilon^\top(y - X\beta - Ub)\, W\!\big(\lambda_\varepsilon^\top(y - X\beta - Ub)/\sigma\big). \tag{8}$$
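The moments (7) and (8) can be computed analogously to (4) and (5); the short sketch below is again only illustrative.

```r
# Hedged sketch of (7) and (8), the truncated-normal moments of Z* given b and y.
estep_zstar <- function(beta, sigma2, lambda, y, X, U, b) {
  m <- as.numeric(t(lambda) %*% (y - X %*% beta - U %*% b))  # lambda'(y - X beta - U b)
  s <- sqrt(sigma2)
  W <- exp(dnorm(m / s, log = TRUE) - pnorm(m / s, log.p = TRUE))
  list(zs1 = m + s * W,                    # E[Z* | b, y], Equation (7)
       zs2 = m^2 + s^2 + s * m * W)        # E[Z*^2 | b, y], Equation (8)
}
```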

3. Parameter Estimation via the EM Algorithm

Now, let $y = (y_1, \ldots, y_n)^\top$ be the vector of observed responses and $z$ be a missing observation. Then, the complete-data log-likelihood function associated with $y_c = (y^\top, z)^\top$ is
$$l_c(\theta; X, y_c) = C - \frac{n}{2}\log\sigma^2 - \frac{1}{2}\log|V| - \frac{1}{2\sigma^2}(y - X\beta)^\top V^{-1}(y - X\beta) - \frac{1}{2}\log(\sigma^2\delta^2) - \frac{1}{2\sigma^2\delta^2}\big(z - \lambda_\varepsilon^\top V^{-1}(y - X\beta)\big)^2,$$
where $C$ does not depend on the unknown parameters.
If $\hat\theta^{(r)} = (\hat\beta^{(r)\top}, \hat\sigma^{2(r)}, \hat\gamma^{(r)\top}, \hat\lambda_\varepsilon^{(r)\top})^\top$ is the estimate at the $r$th iteration, then the expected complete-data log-likelihood function is
$$Q(\theta \mid \hat\theta^{(r)}) = E\big[l_c(\theta; X, y_c) \mid y, \hat\theta^{(r)}\big] \propto -\frac{n}{2}\log\sigma^2 - \frac{1}{2}\log|V| - \frac{1}{2\sigma^2}(y - X\beta)^\top V^{-1}(y - X\beta) - \frac{1}{2}\log(\sigma^2\delta^2) - \frac{1}{2\sigma^2\delta^2}\Big[\widehat{z^2}^{(r)} - 2\hat z^{(r)}\lambda_\varepsilon^\top V^{-1}(y - X\beta) + \big(\lambda_\varepsilon^\top V^{-1}(y - X\beta)\big)^2\Big], \tag{9}$$
where $\hat z^{(r)} = E[Z \mid y; \hat\theta^{(r)}]$ and $\widehat{z^2}^{(r)} = E[Z^2 \mid y; \hat\theta^{(r)}]$ are calculated by substituting $\hat\theta^{(r)}$ into Equations (4) and (5), respectively.
To obtain the new estimate $\hat\theta^{(r+1)}$, the M-step maximizes $Q(\theta \mid \hat\theta^{(r)})$ with respect to $\theta$, which amounts to solving the equations
$$\frac{\partial Q(\theta \mid \hat\theta^{(r)})}{\partial\beta} = 0, \qquad \frac{\partial Q(\theta \mid \hat\theta^{(r)})}{\partial\sigma^2} = 0, \qquad \frac{\partial Q(\theta \mid \hat\theta^{(r)})}{\partial\gamma_i} = 0, \quad i = 1, \ldots, m.$$
If we define $h = V^{-1}\lambda_\varepsilon$, then from Equation (9) the update of $\beta$ is given by
$$\hat\beta^{(r+1)} = \Big[X^\top\big(\hat V^{(r)-1} + \hat h^{(r)}\hat h^{(r)\top}/\hat\delta^{2(r)}\big)X\Big]^{-1}\Big[X^\top\big(\hat V^{(r)-1} + \hat h^{(r)}\hat h^{(r)\top}/\hat\delta^{2(r)}\big)y - \hat z^{(r)}X^\top\hat h^{(r)}/\hat\delta^{2(r)}\Big].$$
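A hedged sketch of this update is given below; h = V⁻¹λ_ε, δ² and the E-step value z1 are assumed to have been computed at the current iterate as in the earlier sketches.

```r
# Illustrative M-step update for beta (a direct transcription of the closed form above).
update_beta <- function(X, y, Vinv, h, delta2, z1) {
  A <- Vinv + tcrossprod(h) / delta2                     # V^{-1} + h h' / delta^2
  rhs <- t(X) %*% (A %*% y) - z1 * (t(X) %*% h) / delta2
  as.vector(solve(t(X) %*% A %*% X, rhs))
}
```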
Before estimating the variance components, we present the prediction of the random effects. In a similar way as for the fixed effects (see Appendix A.1 for details), the ML prediction of $b$ is given by
$$\hat b^{(r+1)} = \hat\Sigma^{(r)}U^\top\hat V^{(r)-1}\Big[\Big(I_n + \frac{1}{\hat\delta^{2(r)}}\hat\lambda_\varepsilon^{(r)}\hat\lambda_\varepsilon^{(r)\top}\hat V^{(r)-1}\Big)\big(y - X\hat\beta^{(r)}\big) - \frac{1}{\hat\delta^{2(r)}}\hat z^{*(r)}\hat\lambda_\varepsilon^{(r)}\Big],$$
where $\hat z^{*(r)} = E[Z^* \mid \hat b^{(r)}, y; \hat\theta^{(r)}]$ and $\widehat{z^{*2}}^{(r)} = E[Z^{*2} \mid \hat b^{(r)}, y; \hat\theta^{(r)}]$ are calculated by substituting $\hat\theta^{(r)}$ and $\hat b^{(r)}$ into Equations (7) and (8), respectively.
Then, for the $i$th random-effect factor, we have
$$\hat b_i^{(r+1)} = \hat\gamma_i^{(r)}U_i^\top\hat V^{(r)-1}\Big[\Big(I_n + \frac{1}{\hat\delta^{2(r)}}\hat\lambda_\varepsilon^{(r)}\hat\lambda_\varepsilon^{(r)\top}\hat V^{(r)-1}\Big)\big(y - X\hat\beta^{(r)}\big) - \frac{1}{\hat\delta^{2(r)}}\hat z^{*(r)}\hat\lambda_\varepsilon^{(r)}\Big].$$
From Equation (9), the estimates of the variance components are derived as
$$\hat\sigma^{2(r+1)} = \frac{1}{n + 1}\Big[(y - X\hat\beta^{(r)})^\top\big(\hat V^{(r)-1} + \hat h^{(r)}\hat h^{(r)\top}/\hat\delta^{2(r)}\big)(y - X\hat\beta^{(r)}) + \big(\widehat{z^2}^{(r)} - 2\hat z^{(r)}\hat h^{(r)\top}(y - X\hat\beta^{(r)})\big)/\hat\delta^{2(r)}\Big]$$
and
$$\hat\sigma_i^{2(r+1)} = \frac{\hat\gamma_i^{(r)2}}{q_i - \operatorname{tr}(T_{ii})}\Big\{(y - X\hat\beta^{(r)})^\top\hat V^{(r)-1}U_iU_i^\top\hat V^{(r)-1}(y - X\hat\beta^{(r)}) - \frac{\hat\sigma^{2(r)}}{\hat\delta^{2(r)}}\hat h^{(r)\top}U_iU_i^\top\hat h^{(r)} + \frac{1}{\hat\delta^{4(r)}}\Big[\widehat{z^2}^{(r)} - 2\hat z^{(r)}\hat h^{(r)\top}(y - X\hat\beta^{(r)}) + \big(\hat h^{(r)\top}(y - X\hat\beta^{(r)})\big)^2\Big]\hat h^{(r)\top}U_iU_i^\top\hat h^{(r)} + \frac{2}{\hat\delta^{2(r)}}\Big[(y - X\hat\beta^{(r)})^\top\hat h^{(r)}\,\hat h^{(r)\top}U_iU_i^\top\hat V^{(r)-1}(y - X\hat\beta^{(r)}) - \hat z^{(r)}\hat h^{(r)\top}U_iU_i^\top\hat V^{(r)-1}(y - X\hat\beta^{(r)})\Big]\Big\},$$
where $T_{ii}$ is the $i$th diagonal block of the matrix $T$ (see Appendix A.2 for details). By substituting $\hat z^{(r)}$ and $\widehat{z^2}^{(r)}$ with $\hat z^{*(r)}$ and $\widehat{z^{*2}}^{(r)}$, respectively, in the above equation, one can obtain another estimator of $\sigma_i^2$ based on $\hat b_i^{(r+1)}$, analogous to its counterpart in the normal LMM. This estimator is
$$\tilde\sigma_i^{2(r+1)} = \frac{1}{q_i - \operatorname{tr}(T_{ii})}\Big[\hat b_i^{(r+1)\top}\hat b_i^{(r+1)} - \tilde\sigma^{2(r)}\hat\delta^{(r)}d_i^\top d_i\Big],$$
where $d_i = U_i^\top\hat h^{(r)}$.
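For completeness, the σ² update above can be transcribed directly; the sketch below is illustrative only (the γ_i update, which requires the trace terms tr(T_ii), is omitted here).

```r
# Hedged sketch of the sigma^2 update; r = y - X %*% beta_new, z1 and z2 from estep_z().
update_sigma2 <- function(r, Vinv, h, delta2, z1, z2) {
  n <- length(r)
  quad <- as.numeric(t(r) %*% (Vinv + tcrossprod(h) / delta2) %*% r)
  (quad + (z2 - 2 * z1 * as.numeric(t(h) %*% r)) / delta2) / (n + 1)
}
```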
Finally, the ML estimate of the skewness parameters is
$$\hat\lambda_\varepsilon^{(r+1)} = \operatorname*{argmax}_{\lambda_\varepsilon}\, Q\big(\hat\beta^{(r+1)}, \hat\sigma^{2(r+1)}, \hat\gamma^{(r+1)}, \lambda_\varepsilon \mid \hat\theta^{(r)}\big),$$
where $Q(\hat\beta^{(r+1)}, \hat\sigma^{2(r+1)}, \hat\gamma^{(r+1)}, \lambda_\varepsilon \mid \hat\theta^{(r)})$ is $Q(\theta \mid \hat\theta^{(r)})$ evaluated at the updated $\hat\beta^{(r+1)}$, $\hat\sigma^{2(r+1)}$ and $\hat\gamma^{(r+1)}$.
The algorithm stops when a reasonable convergence rule is satisfied (e.g., $\|\hat\theta^{(r+1)} - \hat\theta^{(r)}\| < 10^{-6}$). A set of adequate starting values can be obtained by fitting the normal LMM for $\beta$, $\sigma^2$ and $\sigma_i^2$, $i = 1, \ldots, m$, and by using the sample skewness coefficient of the residuals, or zero, for $\lambda_\varepsilon$. However, as recommended in the literature, the EM algorithm should be run several times with different starting values.
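A hedged sketch of how starting values and the stopping rule might be coded is shown below; here a simple least-squares fit stands in for the normal-LMM starting fit, and all function names are hypothetical.

```r
# Illustrative starting values (OLS stand-in for the normal LMM) and convergence check.
start_values <- function(y, X, m) {
  fit  <- lm.fit(X, y)
  res  <- fit$residuals
  skew <- mean((res - mean(res))^3) / sd(res)^3   # sample skewness of the residuals
  list(beta   = fit$coefficients,
       sigma2 = mean(res^2),
       gamma  = rep(1, m),                        # crude starting value for the ratios
       lambda = rep(skew, length(y)))             # or zeros, as suggested above
}

converged <- function(theta_new, theta_old, tol = 1e-6) {
  sqrt(sum((unlist(theta_new) - unlist(theta_old))^2)) < tol
}
```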

4. Local Influence Analysis

Cook's local influence method is used to evaluate the effect of various minor model perturbations on the parameter estimates. Inspired by the general idea of the EM algorithm, Zhu and Lee [24] generalized the local influence diagnostic method to general statistical models with incomplete data based on a Q-function. Here, we briefly describe a natural extension of this procedure to the SN-LMM. In this section, we assume that the $\gamma_i$'s are known; if they are unknown, their ML estimates are plugged back into $\Sigma$, so the parameter vector becomes $\theta = (\beta^\top, \sigma^2, \lambda_\varepsilon^\top)^\top$.
Let $\omega = (\omega_1, \omega_2, \ldots, \omega_n)^\top$ be an $n$-dimensional vector of perturbations varying in an open region $\Omega \subseteq \mathbb{R}^n$, and denote the perturbed complete-data log-likelihood function by $l_c(\theta, \omega; X, y_c)$. It is assumed that there exists $\omega_0$, a vector of no perturbation, such that $l_c(\theta, \omega_0; X, y_c) = l_c(\theta; X, y_c)$ for all $\theta$. Let $Q(\hat\theta(\omega) \mid \hat\theta^{(r)})$ be the maximum value of $Q(\theta, \omega \mid \hat\theta^{(r)}) = E[l_c(\theta, \omega; X, y_c) \mid y, \hat\theta^{(r)}]$, where $\hat\theta(\omega)$ denotes the ML estimate under $Q(\theta, \omega \mid \hat\theta^{(r)})$. To evaluate the influence of minor perturbations on the ML estimate $\hat\theta$, one may consider the Q-displacement function, defined as
$$Q_D(\omega) = 2\big[Q(\hat\theta \mid \hat\theta^{(r)}) - Q(\hat\theta(\omega) \mid \hat\theta^{(r)})\big].$$
Zhu and Lee [24] suggested studying the local behavior of $Q_D(\omega)$ around $\omega_0$. Following their proposal, the normal curvature in the direction of a unit vector $d$, given by $C_{Q_D, d} = -2\,d^\top\ddot Q_{\omega_0}d$, can be employed to summarize the local behavior of the Q-displacement function, where
$$\ddot Q_{\omega_0} = \Delta_{\omega_0}^\top\big[\ddot Q_\theta(\hat\theta)\big]^{-1}\Delta_{\omega_0},$$
in which
$$\ddot Q_\theta(\hat\theta) = \left.\frac{\partial^2 Q(\theta \mid \hat\theta^{(r)})}{\partial\theta\,\partial\theta^\top}\right|_{\theta = \hat\theta}$$
and
$$\Delta_\omega = \left.\frac{\partial^2 Q(\theta, \omega \mid \hat\theta^{(r)})}{\partial\theta\,\partial\omega^\top}\right|_{\theta = \hat\theta(\omega)}.$$
Since most influence measures proposed in the statistical literature are closely related to a spectral decomposition of $-2\ddot Q_{\omega_0}$, we use this matrix to detect influential observations. Let $-2\ddot Q_{\omega_0} = \sum_{k=1}^{n}\xi_k e_k e_k^\top$ be the spectral decomposition of $-2\ddot Q_{\omega_0}$, where $\{(\xi_k, e_k): k = 1, \ldots, n\}$ are the eigenvalue–eigenvector pairs of $-2\ddot Q_{\omega_0}$ with $\xi_1 \geq \cdots \geq \xi_q > \xi_{q+1} = \cdots = \xi_n = 0$, and $\{e_k = (e_{k1}, \ldots, e_{kn})^\top: k = 1, \ldots, n\}$ is the associated orthonormal basis.
Following Zhu and Lee [24] and Lu and Song [34], the assessment of influential observations is based on
$$M(0)_i = \sum_{k=1}^{q}\tilde\xi_k e_{ki}^2, \qquad i = 1, \ldots, n,$$
where $\tilde\xi_k = \xi_k/(\xi_1 + \cdots + \xi_q)$. The influence measure $M(0)_i$ may also be obtained through
$$B_{Q_D, d_i} = \frac{-2\,d_i^\top\ddot Q_{\omega_0}d_i}{\operatorname{tr}\big(-2\ddot Q_{\omega_0}\big)},$$
where $d_i$ is an $n \times 1$ vector with the $i$th element equal to one and all other elements equal to zero. Moreover, following Lee and Xu [35], we use the cut-off point $1/n + c^*\,SM(0)$ to classify the $i$th observation as influential, where $c^*$ is a constant chosen according to the real application and $SM(0)$ denotes the standard deviation of $\{M(0)_i: i = 1, \ldots, n\}$.
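Given a matrix standing for the quantity -2Q̈ at ω₀ (assembled from Δ and the Hessian of the next subsection), the measure M(0) and the benchmark can be computed as in the hedged sketch below.

```r
# Illustrative computation of M(0)_i and the cut-off 1/n + c* SM(0).
influence_M0 <- function(neg2Qddot, cstar = 3) {
  n   <- nrow(neg2Qddot)
  eg  <- eigen((neg2Qddot + t(neg2Qddot)) / 2)              # symmetrize before decomposing
  pos <- eg$values > 1e-10                                   # keep the nonzero eigenvalues
  xi_tilde <- eg$values[pos] / sum(eg$values[pos])
  M0  <- as.vector((eg$vectors[, pos, drop = FALSE]^2) %*% xi_tilde)
  list(M0 = M0, benchmark = 1 / n + cstar * sd(M0))
}
```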

4.1. The Hessian Matrix

To obtain the local influence diagnostic measures for a particular perturbation scheme, we need to compute $\partial^2 Q(\theta \mid \hat\theta^{(r)})/\partial\theta\,\partial\theta^\top$. It follows from (9) that the Hessian matrix has elements given by
$$\frac{\partial^2 Q(\theta \mid \hat\theta^{(r)})}{\partial\beta\,\partial\beta^\top} = -\frac{1}{\sigma^2}X^\top\Big(V^{-1} + \frac{1}{\delta^2}hh^\top\Big)X,$$
$$\frac{\partial^2 Q(\theta \mid \hat\theta^{(r)})}{\partial\beta\,\partial\sigma^2} = -\frac{1}{\sigma^4}X^\top\Big[\Big(V^{-1} + \frac{1}{\delta^2}hh^\top\Big)(y - X\beta) - \frac{\hat z^{(r)}}{\delta^2}h\Big],$$
$$\frac{\partial^2 Q(\theta \mid \hat\theta^{(r)})}{\partial\beta\,\partial\lambda_\varepsilon^\top} = \frac{2}{\sigma^2\delta^4}\big(\hat z^{(r)} - h^\top(y - X\beta)\big)X^\top h\,(\lambda_\varepsilon - h)^\top - \frac{1}{\sigma^2\delta^2}X^\top V^{-1}\Big[\big(\hat z^{(r)} - h^\top(y - X\beta)\big)I_n - \lambda_\varepsilon(y - X\beta)^\top V^{-1}\Big],$$
$$\frac{\partial^2 Q(\theta \mid \hat\theta^{(r)})}{\partial\sigma^2\,\partial\sigma^2} = \frac{n + 1}{2\sigma^4} - \frac{1}{\sigma^6}(y - X\beta)^\top V^{-1}(y - X\beta) - \frac{1}{\sigma^6\delta^2}\Big[\widehat{z^2}^{(r)} - 2\hat z^{(r)}h^\top(y - X\beta) + \big(h^\top(y - X\beta)\big)^2\Big],$$
$$\frac{\partial^2 Q(\theta \mid \hat\theta^{(r)})}{\partial\sigma^2\,\partial\lambda_\varepsilon^\top} = -\frac{1}{\sigma^4\delta^4}\Big[\widehat{z^2}^{(r)} - 2\hat z^{(r)}h^\top(y - X\beta) + \big(h^\top(y - X\beta)\big)^2\Big](\lambda_\varepsilon - h)^\top - \frac{1}{\sigma^4\delta^2}\big(\hat z^{(r)} - h^\top(y - X\beta)\big)(y - X\beta)^\top V^{-1},$$
$$\frac{\partial^2 Q(\theta \mid \hat\theta^{(r)})}{\partial\lambda_\varepsilon\,\partial\lambda_\varepsilon^\top} = \frac{2}{\delta^4}(\lambda_\varepsilon - h)(\lambda_\varepsilon - h)^\top - \frac{1}{\delta^2}(I_n - V^{-1}) - \frac{1}{\sigma^2\delta^4}\Big[\widehat{z^2}^{(r)} - 2\hat z^{(r)}h^\top(y - X\beta) + \big(h^\top(y - X\beta)\big)^2\Big]\Big[\frac{4}{\delta^2}(\lambda_\varepsilon - h)(\lambda_\varepsilon - h)^\top - (I_n - V^{-1})\Big] - \frac{2}{\sigma^2\delta^4}\big(\hat z^{(r)} - h^\top(y - X\beta)\big)(\lambda_\varepsilon - h)(y - X\beta)^\top V^{-1} - \frac{1}{\sigma^2\delta^2}V^{-1}(y - X\beta)\Big[\frac{2}{\delta^2}\big(\hat z^{(r)} - h^\top(y - X\beta)\big)(\lambda_\varepsilon - h)^\top + (y - X\beta)^\top V^{-1}\Big].$$
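When implementing these blocks, it is easy to cross-check them numerically; the sketch below differentiates a user-supplied Q-function with the numDeriv package (assumed installed, and Qfun is a hypothetical user implementation) and compares the result with the analytic Hessian.

```r
# Hedged numerical cross-check of the analytic Hessian; Qfun(theta) is the user's
# implementation of Q(theta | theta_hat^(r)) taking a stacked parameter vector.
library(numDeriv)
check_hessian <- function(Qfun, theta_hat, analytic_hessian) {
  H_num <- hessian(Qfun, theta_hat)
  max(abs(H_num - analytic_hessian))        # should be close to zero
}
```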

4.2. Perturbation Schemes

In this section, we present three distinct perturbation schemes for the model defined in (1).

4.2.1. Perturbation of the Response Variable

A perturbation of the response variable $y$ is defined as $y_\omega = y + s_y\omega$, where $s_y$ is the standard deviation of $y$. In this case, $\omega_0 = 0$ and
$$Q(\theta, \omega \mid \hat\theta^{(r)}) \propto -\frac{n}{2}\log\sigma^2 - \frac{1}{2}\log|V| - \frac{1}{2\sigma^2}(y_\omega - X\beta)^\top V^{-1}(y_\omega - X\beta) - \frac{1}{2}\log(\sigma^2\delta^2) - \frac{1}{2\sigma^2\delta^2}\Big[\widehat{z^2}^{(r)} - 2\hat z^{(r)}h^\top(y_\omega - X\beta) + \big(h^\top(y_\omega - X\beta)\big)^2\Big]. \tag{10}$$
From (10), the matrix
$$\Delta_{\omega_0} = \left.\frac{\partial^2 Q(\theta, \omega \mid \hat\theta^{(r)})}{\partial\theta\,\partial\omega^\top}\right|_{\omega = \omega_0}$$
has the following elements:
$$\Delta_\beta = \frac{s_y}{\sigma^2}X^\top\Big(V^{-1} + \frac{1}{\delta^2}hh^\top\Big),$$
$$\Delta_{\sigma^2} = \frac{s_y}{\sigma^4}(y - X\beta)^\top V^{-1} - \frac{s_y}{\sigma^4\delta^2}\big(\hat z^{(r)} - h^\top(y - X\beta)\big)h^\top,$$
$$\Delta_{\lambda_\varepsilon} = -\frac{2s_y}{\sigma^2\delta^4}\big(\hat z^{(r)} - h^\top(y - X\beta)\big)(\lambda_\varepsilon - h)h^\top + \frac{s_y}{\sigma^2\delta^2}\Big[\big(\hat z^{(r)} - h^\top(y - X\beta)\big)V^{-1} - V^{-1}(y - X\beta)h^\top\Big].$$
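The three blocks above stack into the (p + 1 + n) × n matrix Δ at ω₀; a hedged R transcription is given below (h may be stored as V⁻¹λ_ε, an n × 1 matrix), intended as a sketch rather than the authors' implementation.

```r
# Illustrative assembly of Delta_{omega_0} for the response-variable perturbation.
delta_response <- function(X, y, beta, sigma2, Vinv, h, lambda, delta2, z1) {
  r  <- y - X %*% beta
  sy <- sd(y)
  a  <- as.numeric(z1 - t(h) %*% r)                        # zhat - h'(y - X beta)
  D_beta   <- (sy / sigma2) * t(X) %*% (Vinv + tcrossprod(h) / delta2)
  D_sigma2 <- (sy / sigma2^2) * (t(r) %*% Vinv) -
              (sy / (sigma2^2 * delta2)) * a * t(h)
  D_lambda <- -(2 * sy / (sigma2 * delta2^2)) * a * tcrossprod(lambda - h, h) +
               (sy / (sigma2 * delta2)) * (a * Vinv - tcrossprod(Vinv %*% r, h))
  rbind(D_beta, D_sigma2, D_lambda)                        # rows: beta, sigma^2, lambda_eps
}
```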

4.2.2. Perturbation of the kth Column of the Matrix X

We consider perturbing the $k$th column of the matrix $X$, i.e., $x_k$, by taking $x_{k\omega} = x_k + s_k\omega$, where $s_k$ is the standard deviation of $x_k$ and $\omega_0 = 0$ represents no perturbation. In this case, the perturbed Q-function takes the form
$$Q(\theta, \omega \mid \hat\theta^{(r)}) \propto -\frac{n}{2}\log\sigma^2 - \frac{1}{2}\log|V| - \frac{1}{2\sigma^2}(y - X_\omega\beta)^\top V^{-1}(y - X_\omega\beta) - \frac{1}{2}\log(\sigma^2\delta^2) - \frac{1}{2\sigma^2\delta^2}\Big[\widehat{z^2}^{(r)} - 2\hat z^{(r)}h^\top(y - X_\omega\beta) + \big(h^\top(y - X_\omega\beta)\big)^2\Big], \tag{11}$$
where $X_\omega$ denotes $X$ with its $k$th column replaced by $x_{k\omega}$. It follows from (11) that the elements of the matrix $\Delta_{\omega_0}$ are given by
$$\Delta_\beta = \frac{s_k}{\sigma^2}\Big[u_k(y - X\beta)^\top V^{-1} - \beta_k X^\top V^{-1}\Big] - \frac{s_k}{\sigma^2\delta^2}\Big[\big(\hat z^{(r)} - h^\top(y - X\beta)\big)u_kh^\top + \beta_k X^\top hh^\top\Big],$$
$$\Delta_{\sigma^2} = -\frac{s_k\beta_k}{\sigma^4}(y - X\beta)^\top V^{-1} + \frac{s_k\beta_k}{\sigma^4\delta^2}\big(\hat z^{(r)} - h^\top(y - X\beta)\big)h^\top,$$
$$\Delta_{\lambda_\varepsilon} = \frac{s_k\beta_k}{\sigma^2}\Big\{\big(\hat z^{(r)} - h^\top(y - X\beta)\big)\Big[\frac{2}{\delta^4}(\lambda_\varepsilon - h)h^\top - \frac{1}{\delta^2}V^{-1}\Big] + \frac{1}{\delta^2}V^{-1}(y - X\beta)h^\top\Big\},$$
where $u_k$ denotes the $k$th column of the $p \times p$ identity matrix.

4.2.3. Perturbation of the Dispersion Matrix of the Errors

We modify the dispersion matrix of the errors, $\sigma^2 I_n$, to $\sigma^2 D(\omega)$, where $D(\omega)$ is a diagonal matrix with diagonal elements $\omega = (\omega_1, \omega_2, \ldots, \omega_n)^\top$. The point representing no perturbation is $\omega_0 = (1, 1, \ldots, 1)^\top$. In this case, the perturbed Q-function is
$$Q(\theta, \omega \mid \hat\theta^{(r)}) \propto -\frac{n}{2}\log\sigma^2 - \frac{1}{2}\log|V(\omega)| - \frac{1}{2\sigma^2}(y - X\beta)^\top V^{-1}(\omega)(y - X\beta) - \frac{1}{2}\log\big(\sigma^2\delta^2(\omega)\big) - \frac{1}{2\sigma^2\delta^2(\omega)}\Big[\widehat{z^2}^{(r)} - 2\hat z^{(r)}h^\top(\omega)(y - X\beta) + \big(h^\top(\omega)(y - X\beta)\big)^2\Big], \tag{12}$$
where $V(\omega) = D(\omega) + U\Sigma U^\top$, $h(\omega) = V^{-1}(\omega)\lambda_\varepsilon$, and $\delta^2(\omega) = 1 + \lambda_\varepsilon^\top\big[D(\omega) - D(\omega)V^{-1}(\omega)D(\omega)\big]\lambda_\varepsilon$. From (12), we obtain the elements of $\Delta_{\omega_0}$.
The $k$th column of the matrix $\Delta_\beta$ is given by
$$-\frac{1}{\sigma^2}X^\top c_kc_k^\top(y - X\beta) + \frac{1}{\sigma^2\delta^4}\big(d_k^\top(\lambda_\varepsilon - h)\big)^2\big(\hat z^{(r)} - h^\top(y - X\beta)\big)X^\top h - \frac{1}{\sigma^2\delta^2}\Big[\lambda_\varepsilon^\top c_k\,c_k^\top(y - X\beta)\,X^\top h - \big(\hat z^{(r)} - h^\top(y - X\beta)\big)X^\top c_k\,c_k^\top\lambda_\varepsilon\Big],$$
where $c_k$ is the $k$th column of the matrix $V^{-1}$ and $d_k$ is, as in Section 4, the $n \times 1$ vector with the $k$th element equal to one and all other elements equal to zero. Also, the $k$th element of the vector $\Delta_{\sigma^2}$ is given by
$$-\frac{1}{2\sigma^4}(y - X\beta)^\top c_kc_k^\top(y - X\beta) - \frac{1}{2\sigma^4\delta^4}\big(d_k^\top(\lambda_\varepsilon - h)\big)^2\Big[\widehat{z^2}^{(r)} - 2\hat z^{(r)}h^\top(y - X\beta) + \big(h^\top(y - X\beta)\big)^2\Big] + \frac{1}{\sigma^4\delta^2}\big(\hat z^{(r)} - h^\top(y - X\beta)\big)\lambda_\varepsilon^\top c_k\,c_k^\top(y - X\beta).$$
Finally, the $k$th column of the matrix $\Delta_{\lambda_\varepsilon}$ is given by
$$r_k - \frac{1}{\sigma^2\delta^2}\Big\{\Big[\widehat{z^2}^{(r)} - 2\hat z^{(r)}h^\top(y - X\beta) + \big(h^\top(y - X\beta)\big)^2\Big]\Big(2r_k + \frac{1}{\delta^2}H_k\lambda_\varepsilon\Big) + \frac{1}{\delta^2}\big(d_k^\top(\lambda_\varepsilon - h)\big)^2\big(\hat z^{(r)} - h^\top(y - X\beta)\big)V^{-1}(y - X\beta) - \lambda_\varepsilon^\top c_k\,c_k^\top(y - X\beta)\Big[\frac{2}{\delta^2}\big(\hat z^{(r)} - h^\top(y - X\beta)\big)(I_n - V^{-1})\lambda_\varepsilon + V^{-1}(y - X\beta)\Big] + \big(\hat z^{(r)} - h^\top(y - X\beta)\big)c_kc_k^\top(y - X\beta)\Big\},$$
where
$$H_k = (I_n - V^{-1})D(d_k)(I_n - V^{-1}),$$
with $D(d_k)$ the diagonal matrix whose diagonal is $d_k$, and
$$r_k = \frac{1}{\delta^4}\big(d_k^\top(\lambda_\varepsilon - h)\big)^2(I_n - V^{-1})\lambda_\varepsilon - \frac{1}{\delta^2}H_k\lambda_\varepsilon.$$

5. Empirical Studies

In this section, we present a simulation study and a real data example. The R software (version 4.1.2) [36] was used to conduct all computations.

5.1. Simulation Study

Here, we considered a Monte Carlo simulation to evaluate the performance of the ML estimates in finite sample sizes. The model was taken as follows:
$$Y_{ij} = x_{ij1}\beta_1 + x_{ij2}\beta_2 + b_{1j} + \varepsilon_{ij}, \qquad i = 1, \ldots, s,\ j = 1, \ldots, q,$$
where $q$, following [37], is the number of independent clusters and $s$ is the cluster size in a longitudinal study; hence, the total sample size is $n = sq$. If this model is represented in matrix form as in (1), we have $m = 1$, $b = b_1 = (b_{11}, \ldots, b_{1q})^\top$, $Y = (Y_{11}, \ldots, Y_{1q}, Y_{21}, \ldots, Y_{2q}, \ldots, Y_{s1}, \ldots, Y_{sq})^\top$, and $X = [x_1, x_2]$, where $x_k = (x_{11k}, \ldots, x_{1qk}, x_{21k}, \ldots, x_{2qk}, \ldots, x_{s1k}, \ldots, x_{sqk})^\top$ and $\varepsilon$ has the same structure as $Y$. We considered the following settings for the simulation: $q = 50$ or $100$ and $s = 3$, which are usual sample sizes in longitudinal studies; $\beta_1 = -1$, $\beta_2 = 2$; $x_k \sim N_n(0, I_n)$; $b_1 \sim N_q(0, \sigma_1^2 I_q)$; and $\varepsilon \sim SN_n(0, \sigma^2 I_n, \lambda_\varepsilon)$ with $\lambda_\varepsilon = (\lambda_1, \lambda_2, \lambda_3)^\top \otimes 1_q \in \mathbb{R}^n$, $\lambda_1 = -3$, $\lambda_2 = 0$, $\lambda_3 = 3$, where $1_q$ is a $q \times 1$ vector of ones. To see the effect of data variability on the estimation of the parameters, we took $\sigma_1^2 = 0.5^2$ and $\sigma^2 = 0.4^2$ as low dispersion and $\sigma_1^2 = 0.9^2$ and $\sigma^2 = 0.8^2$ as high dispersion in the response data. We also considered two cases for obtaining the estimates: (I) ignoring skewness in the model and using the ML estimates under normal errors, called here the Normal estimates; and (II) using the estimates proposed in this work, called the SN estimates. For each combination of the parameters, 1000 replications were performed. We applied the mvtnorm and sn packages to generate random samples from the multivariate normal and SN distributions, respectively.
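For concreteness, one replication of this design can be generated as in the hedged sketch below (illustrative code, not the authors'); the ordering of Y and the construction of λ_ε follow the description above.

```r
# Illustrative generation of one simulated dataset with q = 50 clusters of size s = 3.
library(sn)
set.seed(123)
q <- 50; s <- 3; n <- s * q
beta <- c(-1, 2); sigma2 <- 0.4^2; sigma1_2 <- 0.5^2
X  <- cbind(rnorm(n), rnorm(n))                          # x_1 and x_2 ~ N_n(0, I_n)
U1 <- do.call(rbind, replicate(s, diag(q), simplify = FALSE))  # cluster-effect design, n x q
b1 <- rnorm(q, 0, sqrt(sigma1_2))                        # b_1 ~ N_q(0, sigma_1^2 I_q)
lambda <- rep(c(-3, 0, 3), each = q)                     # lambda_eps = (-3, 0, 3)' repeated over clusters
eps <- as.vector(rmsn(1, xi = rep(0, n), Omega = sigma2 * diag(n), alpha = lambda))
y   <- as.vector(X %*% beta + U1 %*% b1 + eps)
```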
In each iteration, we obtained parameter estimates based on both scenarios, and then the mean and the standard deviation (SD) of the Normal and SN estimators were calculated. The summary results are presented in Table 1, Table 2, Table 3 and Table 4.
It can be seen from Table 1, Table 2, Table 3 and Table 4 that the ML estimates of the fixed effects were unbiased in both cases; however, the SN estimates had lower dispersion than the Normal estimates. Both cases improved with increasing sample size, but the SN estimates performed better. Increasing the values of the variance components did not have much effect on the biases of the fixed-effect estimates, but their SDs increased and the precision of the estimates decreased. In addition, we observed that the Normal estimates of the variance components were biased and had larger SDs, whereas the SN estimates were unbiased and had smaller SDs. Increasing the values of the variance components had a much worse effect on the bias and precision of the Normal estimates of the variance components; for the SN estimates, the SDs rose somewhat, which is natural, but the estimates remained unbiased. Furthermore, in both respects (bias and SD), the SN estimates outperformed the Normal estimates. For the Normal estimates, increasing the sample size did not remove the biases, although it decreased the SDs to some extent; for the SN estimates, it decreased both bias and SD. The simulation results in Table 1, Table 2, Table 3 and Table 4 also show that the skewness estimates were unbiased; they deteriorated with increasing variability in the data and improved with increasing sample size (see Figure 1 and Figure 2). Furthermore, the histograms in Figure 1 and Figure 2 indicate that the distribution of the estimators, even for small sample sizes, was approximately normal.

5.2. An Example

The metallic oxide data given by Fellner [2] were used to illustrate the usefulness of our method in applications. Following Fellner [2], a nested model was fitted with raw material type as the fixed effect, while the random effects were lot, sample, chemist, and analysis. For comparative purposes, we considered normal and SN distributions for the random errors in this model, and both assumptions (heteroscedasticity and homoscedasticity) in the random-effect factors were considered. Therefore, we had four scenarios:
(a) Normal random errors and homoscedasticity in random-effect factors.
(b) SN random errors and homoscedasticity in random-effect factors.
(c) Normal random errors and heteroscedasticity in random-effect factors.
(d) SN random errors and heteroscedasticity in random-effect factors.
The estimates of the parameters and some model selection criteria, including the log-likelihood value $l(\theta)$, AIC, and BIC, are given in Table 5. In all cases, the Type 1 mean was approximately the same. The Type 2 mean was identical in the two Normal cases (homoscedasticity and heteroscedasticity), and this was approximately true in the SN cases as well. Under both assumptions on the random-effect factors, the Type 2 mean in the SN case was greater than in the Normal case. In addition, the variance of the error term (analysis variance) and the lot variance were smaller in the SN case than in the Normal case. As expected, the model selection criteria showed that the model fitted under heteroscedasticity and SN random errors had the best fit.
We continued by applying the local influence approach to detect influential observations based on $M(0)$ for this dataset. Figure 3, Figure 4, Figure 5 and Figure 6 depict, respectively, the index plots of $M(0)$ for all scenarios under perturbations of the response variable, the dispersion matrix of the errors, and the columns of the matrix $X$. In all the perturbation schemes, we used $c^* = 3$ to construct the benchmarks. Furthermore, the influential observations are numbered in all figures.
Figure 3 indicates that observation #192 stood out as the most influential in all four scenarios under the response variable perturbation. Some suspicious points in the first three scenarios were regular points in scenario 4; it can be argued that the effects of these points were controlled in this scenario, which corresponds to the most flexible model in this work. Under the perturbation of the dispersion matrix of the errors in Figure 4, observation #192 again appeared as the most influential, and the effects of the other observations in scenario 4 were smaller than in the other scenarios. For the perturbation of the columns of the matrix X, depicted in Figure 5 and Figure 6, the results were approximately the same: observation #192 again appeared to be the most influential. Furthermore, for the first two scenarios (the models under homoscedasticity), observations #191 and #234 were also influential. In general, the less flexible models flagged more suspicious points than the more flexible models; in other words, using a more flexible model decreased the effects of anomalous observations on the statistical results.

6. Conclusions

The study of a multifactor normal LMM under heteroscedasticity was carried out by authors such as [1,2]. They showed that the ML method for estimating the parameters performs well and that the estimates have good properties such as consistency and an asymptotic normal distribution. However, when normality of the model errors or random effects is rejected, these estimates do not lead to satisfactory results. Therefore, LMMs (with only one random-effect factor) based on skewed distributions have been presented by several authors. When the normality assumption does not hold for the model errors or the random-effect factor, these models perform well in comparison with those based on the normality assumption, and diagnostic analyses show that they decrease the effect of outliers. Recent research has shown that new generalized skewed distributions usually provide a better fit than simpler skewed distributions (see, e.g., [38,39]). Clearly, using these distributions in LMMs can also be considered (see, e.g., [40]); but, as mentioned before, the complexity of the calculations in complicated models makes a case for using the simple but flexible SN distribution (for the use of the SN distribution in some recent complicated models, see [13,14]). Therefore, we considered the SN distribution for the model errors in the multifactor LMM under heteroscedasticity in the random-effect factors. Our main goals were parameter estimation and the local influence method for the multifactor SN-LMM under heteroscedasticity in the random-effect factors. First, we developed an EM-based algorithm, as proposed by many works in the literature, to estimate the model parameters, and we obtained closed forms for the variance-component estimates. Then, we applied Zhu and Lee's approach to extend the local influence method to this model. Empirical studies, a simulation study and a real data example, were carried out to examine the behavior of our estimators, and our findings agree with previous results in this field. The simulation results, showing consistency, low dispersion, and an approximately normal distribution, indicate that the estimators perform well even for finite sample sizes. It was also observed that ignoring skewness when the model errors follow a skewed distribution such as the SN leads to unsuitable outcomes. Finally, through a real example, it was shown that taking into account both heteroscedasticity in the random-effect factors and a skewed distribution for the random errors improves the statistical results in comparison with models that consider at most one of them; in this case, we also observed the robustness of the ML estimators through the local influence method. Note that skewed distributions contain symmetric distributions as special cases; when the assumption of symmetry holds for the random variables in a sensitivity analysis of a model, skewed distributions are not recommended because of the cost of the additional parameters. Extending this work to the case where both the random-effect factors and the model errors have SN distributions is theoretically and computationally demanding, but it will be our goal in a subsequent work. Moreover, generalized skewed distributions for the model errors or the random-effect factors in the multifactor LMM, along with the corresponding diagnostic measures, are proposed for future work.

Author Contributions

Conceptualization, Z.N. and M.R.M.; Formal analysis, Z.N. and K.Z.; Investigation, Z.N., M.R.M., S.S. and A.M.; Methodology, Z.N., K.Z. and M.R.M.; Project administration, K.Z., M.R.M. and A.M.; Resources, M.R.M.; Software, M.R.M. and A.M.; Supervision, K.Z., M.R.M. and A.M.; Validation, S.S.; Visualization, A.M.; Writing—original draft, Z.N., K.Z., M.R.M. and S.S.; Writing—review & editing, A.M. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data are available from the authors upon request.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
EM    Expectation–Maximization
LMM   Linear Mixed Model
SN    Skew-Normal
ML    Maximum Likelihood
SD    Standard Deviation
AIC   The Akaike Information Criterion
BIC   The Bayesian Information Criterion

Appendix A

Appendix A.1. ML Prediction of Random Effects

Let $y = (y_1, \ldots, y_n)^\top$ and $b = (b_1^\top, b_2^\top, \ldots, b_m^\top)^\top$ be the vectors of observed responses and random effects, respectively, and let $z^*$ be a missing observation. It follows from (6) that the conditional complete-data log-likelihood function of $b$ given $\theta$, $X$ and $y_c^* = (y^\top, z^*)^\top$ may be expressed as
$$l_c^*(b; \theta, X, y_c^*) = C - \frac{q + 1}{2}\log\sigma^2 - \frac{1}{2}\log|\Sigma T| - \frac{1}{2\sigma^2}\big(b - \Sigma U^\top V^{-1}(y - X\beta)\big)^\top(\Sigma T)^{-1}\big(b - \Sigma U^\top V^{-1}(y - X\beta)\big) - \frac{1}{2\sigma^2}\big(z^* - \lambda_\varepsilon^\top(y - X\beta - Ub)\big)^2 - \log\alpha,$$
where $C$ does not depend on the unknown parameters. If $\hat\theta^{(r)}$ and $\hat b^{(r)}$ are the estimates at the $r$th iteration, the expected conditional complete-data log-likelihood function is
$$Q^*(b \mid \hat\theta^{(r)}, \hat b^{(r)}) = E\big[l_c^*(b; \theta, X, y_c^*) \mid \hat\theta^{(r)}, \hat b^{(r)}\big] \propto -\frac{q + 1}{2}\log\sigma^2 - \frac{1}{2}\log|\Sigma T| - \frac{1}{2\sigma^2}\Big\{b^\top\big[(\Sigma T)^{-1} + U^\top\lambda_\varepsilon\lambda_\varepsilon^\top U\big]b - 2b^\top\big[U^\top(y - X\beta) - \hat z^{*(r)}U^\top\lambda_\varepsilon + U^\top\lambda_\varepsilon\lambda_\varepsilon^\top(y - X\beta)\big] + (y - X\beta)^\top\big(I_n - V^{-1} + \lambda_\varepsilon\lambda_\varepsilon^\top\big)(y - X\beta) - 2\hat z^{*(r)}\lambda_\varepsilon^\top(y - X\beta) + \widehat{z^{*2}}^{(r)}\Big\}.$$
Then, the M-step maximizes $Q^*(b \mid \hat\theta^{(r)}, \hat b^{(r)})$ with respect to $b$; that is, we solve the equation
$$\frac{\partial Q^*(b \mid \hat\theta^{(r)}, \hat b^{(r)})}{\partial b} = -\frac{1}{\sigma^2}\Big[\big((\Sigma T)^{-1} + U^\top\lambda_\varepsilon\lambda_\varepsilon^\top U\big)b - \big(U^\top(y - X\beta) - \hat z^{*(r)}U^\top\lambda_\varepsilon + U^\top\lambda_\varepsilon\lambda_\varepsilon^\top(y - X\beta)\big)\Big] = 0.$$

Appendix A.2. ML Estimation of Variances of Random Effects

To find a solution for $\gamma_i$, and hence for $\sigma_i^2$, $i = 1, \ldots, m$, in Equation (9), we use the relations $|V| = |I_q + U^\top U\Sigma| = |T^{-1}|$, $\partial V^{-1}/\partial\gamma_i = -V^{-1}U_iU_i^\top V^{-1}$, $\partial\Sigma/\partial\gamma_i = \operatorname{diag}(0, \ldots, 0, I_{q_i}, 0, \ldots, 0)$ and
$$\frac{\partial}{\partial\gamma_i}\log|I_q + U^\top U\Sigma| = \operatorname{tr}\Big(T\,U^\top U\,\frac{\partial\Sigma}{\partial\gamma_i}\Big) = \gamma_i^{-1}\big(q_i - \operatorname{tr}(T_{ii})\big),$$
which give
$$\frac{\partial Q(\theta \mid \hat\theta^{(r)})}{\partial\gamma_i} = -\frac{1}{2\gamma_i}\big(q_i - \operatorname{tr}(T_{ii})\big) + \frac{1}{2\sigma^2\delta^4}\Big[\widehat{z^2}^{(r)} - 2\hat z^{(r)}h^\top(y - X\beta) + \big(h^\top(y - X\beta)\big)^2\Big]h^\top U_iU_i^\top h + \frac{1}{2\sigma^2}(y - X\beta)^\top V^{-1}U_iU_i^\top V^{-1}(y - X\beta) - \frac{1}{2\delta^2}h^\top U_iU_i^\top h + \frac{1}{\sigma^2\delta^2}\Big[(y - X\beta)^\top h\,h^\top U_iU_i^\top V^{-1}(y - X\beta) - \hat z^{(r)}h^\top U_iU_i^\top V^{-1}(y - X\beta)\Big].$$
Then, by solving the equations
$$\frac{\partial Q(\theta \mid \hat\theta^{(r)})}{\partial\gamma_i} = 0, \qquad i = 1, \ldots, m,$$
the ML estimates of the $\gamma_i$'s are obtained. Thus, the ML estimate of $\sigma_i^2$ satisfies
$$\hat\sigma_i^2 = \hat\sigma^2\hat\gamma_i, \qquad i = 1, \ldots, m.$$

References

  1. Harville, D.A. Maximum likelihood approaches to variance component estimation and related problems (with discussion). J. Am. Stat. Assoc. 1977, 72, 320–340.
  2. Fellner, W.H. Robust estimation of variance components. Technometrics 1986, 28, 51–60.
  3. Khuri, A.I.; Mathew, T.; Sinha, B.K. Statistical Tests for Mixed Linear Models; John Wiley: New York, NY, USA, 1998.
  4. Wu, M.X.; Yu, K.F.; Liu, A.Y.; Ma, T.F. Simultaneous optimal estimation in linear mixed models. Metrika 2012, 75, 471–489.
  5. Zhang, D.; Davidian, M. Linear mixed models with flexible distributions of random effects for longitudinal data. Biometrics 2001, 57, 795–802.
  6. Pinheiro, J.C.; Liu, C.H.; Wu, Y.N. Efficient algorithms for robust estimation in linear mixed-effects models using a multivariate t-distribution. J. Comput. Graph. Stat. 2001, 10, 249–276.
  7. Ghidey, W.; Lesaffre, E.; Eilers, P. Smooth random effects distribution in a linear mixed model. Biometrics 2004, 60, 945–953.
  8. Lin, T.I.; Lee, J.C. Estimation and prediction in linear mixed models with skew-normal random effects for longitudinal data. Stat. Med. 2008, 27, 1490–1507.
  9. Lachos, V.H.; Dey, D.K.; Cancho, V.G. Robust linear mixed models with skew-normal independent distributions from a Bayesian perspective. J. Stat. Plan. Inference 2009, 139, 4098–4110.
  10. Ye, R.D.; Wang, T.; Gupta, A.K. Distribution of matrix quadratic forms under skew-normal settings. J. Multivar. Anal. 2014, 131, 229–239.
  11. Azzalini, A.; Capitanio, A. Statistical applications of the multivariate skew normal distributions. J. Roy. Stat. Soc. Ser. B 1999, 61, 579–602.
  12. Azzalini, A. A class of distributions which includes the normal ones. Scand. J. Stat. 1985, 12, 171–178.
  13. Hosseini, F.; Karimi, O. Approximate pairwise likelihood inference in SGLM models with skew normal latent variables. J. Comput. Appl. Math. 2021, 398, 113692.
  14. Ju, Y.; Yang, Y.; Hu, M.; Dai, L.; Wu, L. Bayesian influence analysis of the skew-normal spatial autoregression models. Mathematics 2022, 10, 1306.
  15. Verbeke, G.; Lesaffre, E. The effect of misspecifying the random effects distribution in linear mixed models for longitudinal data. Comput. Stat. Data Anal. 1997, 23, 541–556.
  16. Tao, H.; Palta, M.; Yandell, B.S.; Newton, M.A. An estimation method for the semiparametric mixed effects model. Biometrics 1999, 55, 102–110.
  17. Ma, Y.; Genton, M.G.; Davidian, M. Linear mixed models with flexible generalized skew-elliptical random effects. In Skew-Elliptical Distributions and Their Applications: A Journey Beyond Normality; Genton, M.G., Ed.; Chapman & Hall/CRC: Boca Raton, FL, USA, 2004; pp. 339–358.
  18. Arellano-Valle, R.B.; Bolfarine, H.; Lachos, V.H. Skew-normal linear mixed models. J. Data Sci. 2005, 3, 415–438.
  19. Lachos, V.H.; Ghosh, P.; Arellano-Valle, R.B. Likelihood based inference for skew-normal independent linear mixed models. Stat. Sin. 2010, 20, 303–322.
  20. Kheradmandi, A.; Rasekh, A. Estimation in skew-normal linear mixed measurement error models. J. Multivar. Anal. 2015, 136, 1–11.
  21. Maleki, M.; Wraith, D.; Arellano-Valle, R.B. A flexible class of parametric distributions for Bayesian linear mixed models. Test 2019, 28, 543–564.
  22. Ferreira, C.S.; Bolfarine, H.; Lachos, V.H. Linear mixed models based on skew scale mixtures of normal distributions. Commun. Stat. Simul. Comput. 2020, 1–21.
  23. Cook, R.D. Assessment of local influence (with discussion). J. Roy. Stat. Soc. Ser. B 1986, 48, 133–169.
  24. Zhu, H.; Lee, S. Local influence for incomplete-data models. J. Roy. Stat. Soc. Ser. B 2001, 63, 111–126.
  25. Montenegro, L.C.; Bolfarine, H.; Lachos, V.H. Influence diagnostics for a skew extension of the Grubbs measurement error model. Commun. Stat. Simul. Comput. 2009, 38, 667–681.
  26. Zeller, C.B.; Lachos, V.H.; Vilca, F.V. Influence diagnostics for Grubbs's model with asymmetric heavy-tailed distributions. Stat. Pap. 2014, 55, 671–690.
  27. Ferreira, C.S.; Lachos, V.H.; Bolfarine, H. Inference and diagnostics in skew scale mixtures of normal regression models. J. Stat. Comput. Simul. 2015, 85, 517–537.
  28. Massuia, M.B.; Cabral, C.R.B.; Matos, L.A.; Lachos, V.H. Influence diagnostics for Student-t censored linear regression models. Statistics 2015, 49, 1074–1094.
  29. Bolfarine, H.; Montenegro, L.C.; Lachos, V.H. Influence diagnostics for skew-normal linear mixed models. Sankhyā Indian J. Stat. 2007, 69, 648–670.
  30. Montenegro, L.C.; Lachos, V.H.; Bolfarine, H. Local influence analysis for skew-normal linear mixed models. Commun. Stat. Theory Methods 2009, 38, 484–496.
  31. Zeller, C.B.; Labra, F.V.; Lachos, V.H.; Balakrishnan, N. Influence analyses of skew-normal/independent linear mixed models. Comput. Stat. Data Anal. 2010, 54, 1266–1280.
  32. Dempster, A.; Laird, N.; Rubin, D. Maximum likelihood from incomplete data via the EM algorithm. J. Roy. Stat. Soc. Ser. B 1977, 39, 1–22.
  33. Johnson, N.; Kotz, S.; Balakrishnan, N. Continuous Univariate Distributions, 2nd ed.; Wiley: New York, NY, USA, 1994.
  34. Lu, B.; Song, X. Local influence of multivariate probit latent variable models. J. Multivar. Anal. 2006, 97, 1783–1798.
  35. Lee, S.Y.; Xu, L. Influence analysis of nonlinear mixed-effects models. Comput. Stat. Data Anal. 2004, 45, 321–341.
  36. R Core Team. R: A Language and Environment for Statistical Computing; R Foundation for Statistical Computing: Vienna, Austria, 2021. Available online: https://www.R-project.org/ (accessed on 1 November 2021).
  37. Wang, N.; Lin, X.; Gutierrez, R.G.; Carroll, R.J. Bias analysis and SIMEX approach in generalized linear mixed measurement error models. J. Am. Stat. Assoc. 1998, 93, 249–261.
  38. Labra, F.V.; Garay, A.M.; Lachos, V.H.; Ortega, E.M.M. Estimation and diagnostics for heteroscedastic nonlinear regression models based on scale mixtures of skew-normal distributions. J. Stat. Plan. Inference 2012, 142, 2149–2165.
  39. Schumacher, F.L.; Dey, D.K.; Lachos, V.H. Approximate inferences for nonlinear mixed effects models with scale mixtures of skew-normal distributions. J. Stat. Theory Pract. 2021, 15, 60.
  40. Schumacher, F.L.; Lachos, V.H.; Matos, L.A. Scale mixture of skew-normal linear mixed models with within-subject serial dependence. Stat. Med. 2021, 40, 1790–1810.
Figure 1. The histograms of the estimators of $\lambda_1$, $\lambda_2$ and $\lambda_3$ for $\sigma_1^2 = 0.5^2$ and $\sigma^2 = 0.4^2$. The left histograms are for $n = 150$ and the right histograms are for $n = 300$.
Figure 2. The histograms of the estimators of $\lambda_1$, $\lambda_2$ and $\lambda_3$ for $\sigma_1^2 = 0.9^2$ and $\sigma^2 = 0.8^2$. The left histograms are for $n = 150$ and the right histograms are for $n = 300$.
Figure 3. Index plots of M(0) for the perturbation of the response variable for the four scenarios. Dotted lines show the yardstick for M(0) with $c^* = 3$.
Figure 4. Index plots of M(0) for the perturbation of the dispersion matrix of the errors for the four scenarios. Dotted lines show the yardstick for M(0) with $c^* = 3$.
Figure 5. Index plots of M(0) for the perturbation of the first column of X for the four scenarios. Dotted lines show the yardstick for M(0) with $c^* = 3$.
Figure 6. Index plots of M(0) for the perturbation of the second column of X for the four scenarios. Dotted lines show the yardstick for M(0) with $c^* = 3$.
Table 1. Mean and SD of each estimator for both cases with $n = 150$, $\sigma^2 = 0.4^2$ and $\sigma_1^2 = 0.5^2$.

| Parameter | True Value | Normal Mean | Normal SD | SN Mean | SN SD |
| --- | --- | --- | --- | --- | --- |
| $\beta_1$ | −1 | −1.0031 | 0.0409 | −0.9990 | 0.0385 |
| $\beta_2$ | 2 | 1.9992 | 0.0411 | 2.0019 | 0.0406 |
| $\sigma^2$ | 0.16 | 0.1871 | 0.0233 | 0.1580 | 0.0222 |
| $\sigma_1^2$ | 0.25 | 0.2198 | 0.0611 | 0.2455 | 0.0575 |
| $\lambda_1$ | −3 | – | – | −3.1239 | 0.3321 |
| $\lambda_2$ | 0 | – | – | −0.0083 | 0.0963 |
| $\lambda_3$ | 3 | – | – | 2.9146 | 0.3275 |
Table 2. Mean and SD of each estimator for both cases with $n = 300$, $\sigma^2 = 0.4^2$ and $\sigma_1^2 = 0.5^2$.

| Parameter | True Value | Normal Mean | Normal SD | SN Mean | SN SD |
| --- | --- | --- | --- | --- | --- |
| $\beta_1$ | −1 | −0.9989 | 0.0298 | −1.0012 | 0.0268 |
| $\beta_2$ | 2 | 1.9978 | 0.0297 | 2.0001 | 0.0274 |
| $\sigma^2$ | 0.16 | 0.1893 | 0.0167 | 0.1587 | 0.0160 |
| $\sigma_1^2$ | 0.25 | 0.2212 | 0.0389 | 0.2505 | 0.0433 |
| $\lambda_1$ | −3 | – | – | −2.9564 | 0.2249 |
| $\lambda_2$ | 0 | – | – | 0.0015 | 0.0635 |
| $\lambda_3$ | 3 | – | – | 3.0650 | 0.2268 |
Table 3. Mean and SD of each estimator for both cases with $n = 150$, $\sigma^2 = 0.8^2$ and $\sigma_1^2 = 0.9^2$.

| Parameter | True Value | Normal Mean | Normal SD | SN Mean | SN SD |
| --- | --- | --- | --- | --- | --- |
| $\beta_1$ | −1 | −0.9995 | 0.0837 | −0.9988 | 0.0800 |
| $\beta_2$ | 2 | 1.9989 | 0.0820 | 1.9980 | 0.0778 |
| $\sigma^2$ | 0.64 | 0.7462 | 0.0942 | 0.6204 | 0.0868 |
| $\sigma_1^2$ | 0.81 | 0.6872 | 0.2089 | 0.8138 | 0.1884 |
| $\lambda_1$ | −3 | – | – | −2.9000 | 0.6506 |
| $\lambda_2$ | 0 | – | – | 0.0056 | 0.2007 |
| $\lambda_3$ | 3 | – | – | 2.9032 | 0.6644 |
Table 4. Mean and SD of each estimator for both cases with $n = 300$, $\sigma^2 = 0.8^2$ and $\sigma_1^2 = 0.9^2$.

| Parameter | True Value | Normal Mean | Normal SD | SN Mean | SN SD |
| --- | --- | --- | --- | --- | --- |
| $\beta_1$ | −1 | −0.9970 | 0.0576 | −0.9982 | 0.0545 |
| $\beta_2$ | 2 | 2.0019 | 0.0579 | 1.9981 | 0.0547 |
| $\sigma^2$ | 0.64 | 0.7580 | 0.0670 | 0.6330 | 0.0644 |
| $\sigma_1^2$ | 0.81 | 0.6883 | 0.1336 | 0.8059 | 0.1432 |
| $\lambda_1$ | −3 | – | – | −3.0679 | 0.2716 |
| $\lambda_2$ | 0 | – | – | −0.0049 | 0.1386 |
| $\lambda_3$ | 3 | – | – | 2.9507 | 0.2694 |
Table 5. Metallic oxide summary results for the four scenarios.

| Parameter | (a) | (b) | (c) | (d) |
| --- | --- | --- | --- | --- |
| Type 1 mean | 3.865 | 3.858 | 3.863 | 3.856 |
| Type 2 mean | 3.064 | 3.107 | 3.064 | 3.134 |
| Lot variance | 0.175 | 0.123 | 0.607 | 0.565 |
| Sample variance | – | – | 0.043 | 0.044 |
| Chemist variance | – | – | 0.032 | 0.033 |
| Analysis variance | 0.045 | 0.041 | 0.043 | 0.037 |
| Skewness parameter | – | 4.396 | – | 3.674 |
| $l(\theta)$ | −122.762 | −118.602 | −91.322 | −88.176 |
| AIC | 253.524 | 247.204 | 194.644 | 190.352 |