Article

Pretest Estimation for the Common Mean of Several Normal Distributions: In Meta-Analysis Context

by Peter M. Mphekgwana 1,2,*, Yehenew G. Kifle 3 and Chioneso S. Marange 1

1 Department of Statistics, Faculty of Science and Agriculture, Fort Hare University, Alice 5700, South Africa
2 Department of Research Administration and Development, University of Limpopo, Polokwane 0727, South Africa
3 Department of Mathematics and Statistics, University of Maryland Baltimore County, Baltimore, MD 21250, USA
* Author to whom correspondence should be addressed.
Axioms 2024, 13(9), 648; https://doi.org/10.3390/axioms13090648
Submission received: 1 July 2024 / Revised: 11 September 2024 / Accepted: 20 September 2024 / Published: 22 September 2024
(This article belongs to the Special Issue Probability, Statistics and Estimation)

Abstract
The estimation of unknown quantities from multiple independent yet non-homogeneous samples has garnered increasing attention in various fields over the past decade. This interest is evidenced by the wide range of applications discussed in recent literature. In this study, we propose a preliminary test estimator for the common mean (μ) with unknown and unequal variances. When prior information regarding the population mean is available and μ might equal the reference value, a hypothesis test can be conducted: H0: μ = μ0 versus H1: μ ≠ μ0. The initial sample is used to test H0; if H0 is not rejected, we become more confident in using our prior information (after the test) to estimate μ, whereas if H0 is rejected, the prior information is discarded. Our simulations indicate that the proposed preliminary test estimator substantially decreases the mean squared error (MSE) compared to unbiased estimators such as the Graybill-Deal (GD) estimator, particularly when μ closely aligns with the hypothesized mean (μ0). Furthermore, our analysis indicates that the proposed test estimator outperforms the existing method, particularly in cases with small sample sizes. We advocate for its adoption to improve the accuracy of common mean estimation. Our findings suggest that, through careful application to real meta-analyses, the proposed test estimator shows promising potential.
MSC:
62H15; 62E15; 62F03; 62H10

1. Introduction

As medical knowledge continues to expand rapidly, healthcare providers face significant challenges in thoroughly evaluating and analyzing the data necessary to make well-informed decisions [1,2,3]. The complexity of these challenges is further heightened by the variety of findings presented in different studies, which are sometimes conflicting. Meta-analysis, along with research synthesis or integration, has become an effective tool for addressing these issues. This method achieves its objective by applying rigorous statistical techniques to aggregate the results from multiple individual studies, thereby combining their findings [2,4]. Additionally, meta-analysis has gained widespread attention across numerous scientific fields, such as education, the social sciences, and medicine. For example, in education it has been used to consolidate research on the effectiveness of coaching in improving Scholastic Aptitude Test (SAT) scores in both the verbal and mathematical sections [5]. In the social sciences, it has been used to synthesize studies on gender differences in quantitative, verbal, and visual-spatial abilities [6]. In healthcare, meta-analysis has been particularly valuable during the COVID-19 pandemic, enhancing our understanding of the virus and informing public health strategies [7,8].
The challenge of combining two or more unbiased estimators is a common issue in applied statistics, with significant implications across various fields. A notable example of this problem occurred when Meier [9] was tasked with making inferences about the mean albumin level in plasma protein in human subjects using data from four separate experiments. Similarly, Eberhardt et al. [10] faced a scenario where they needed to draw conclusions about the mean selenium content in non-fat milk powder by integrating results from four different methods across four experiments.
Most of the early research on drawing inferences about the common mean (μ) focuses on point estimation and theoretical decision rules regarding μ. Graybill and Deal [11] were among the first researchers to study the estimation of μ. Since then, numerous works have built upon and extended their initial work [12,13,14,15,16,17,18], along with the related references. Conversely, Meier [9] developed a method for estimating the confidence interval for μ. In addition, refs. [19,20] have devised approximate confidence intervals. The properties of such estimators have attracted substantial attention in the literature. Sinha et al. [2] derived an unbiased estimator of the variance of the Graybill-Deal estimator, and Krishnamoorthy and Moore [21] considered this in the prediction problem of linear regression.
In some cases, researchers encounter situations where prior information (θ0) on the population mean is available, whether through pre-test information or historical data. Pretest (preliminary test) and shrinkage estimators leverage such preliminary information to improve the accuracy of parameter estimation. These estimators borrow strength from both the sample data and the pre-test information, resulting in higher efficiency and reliability than traditional estimators. Bancroft [22] and Stein [23] introduced and extensively examined the preliminary test and shrinkage estimators. Their method has influenced numerous advancements and applications in statistics and has established a basis for the use of shrinkage estimators in contemporary statistical practice [24,25,26]. Thompson [27] proposed a shrinkage technique given as
\omega = q\,\theta_0 + (1 - q)\,\theta,
where θ0 is the prior guess, q = 1 corresponds to accepting H0 (placing full weight on θ0), and q = 0 to rejecting H0. This technique was aimed at improving the usual estimator of a parameter θ for estimating the mean, thereby reducing the mean square error (MSE) of the uniform minimum-variance unbiased estimator (UMVUE) for the population mean. It has been observed that the shrinkage estimator performs better than the conventional estimator when the true value aligns closely with the prior guess. Consequently, instead of treating q as a constant in the shrinkage estimator, it is advisable to regard it as a weight ranging between 0 and 1 [27]. In this context, q can be viewed as a continuous function of certain pertinent statistics, whose value decreases consistently as the deviation (θ − θ0) from the reference value increases.
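To make the idea concrete, the following minimal R sketch shrinks a single sample mean toward a prior guess θ0 using a weight q that decays as the standardized deviation from θ0 grows; the particular weight function is a hypothetical illustration only, not the estimator developed later in this paper.

```r
# Minimal sketch of a Thompson-type shrinkage estimator for one normal mean.
# The weight function below is a hypothetical choice: it is close to 1 when the
# sample mean is near the prior guess theta0 and decays toward 0 otherwise.
shrink_mean <- function(x, theta0) {
  n      <- length(x)
  xbar   <- mean(x)
  t_stat <- (xbar - theta0) / (sd(x) / sqrt(n))  # standardized deviation from theta0
  q      <- exp(-t_stat^2)                       # weight in (0, 1]
  q * theta0 + (1 - q) * xbar                    # omega = q * theta0 + (1 - q) * estimator
}

set.seed(1)
shrink_mean(rnorm(20, mean = 0.1), theta0 = 0)
```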
This preliminary test approach has been widely used in statistics [24,25,26]. Khan and Saleh [24] deployed a preliminary test for estimating the mean of a univariate normal population with an unknown variance. Shih et al. [25] proposed a class of general pretest estimators for the univariate normal mean which includes numerous existing estimators, such as pretest, shrinkage, Bayes, and empirical Bayes estimators. In the context of meta-analysis, Taketomi et al. [26] proposed simultaneous estimation of individual means using James-Stein shrinkage estimators, which improved upon the individual studies' estimators. It has been observed in the literature that, when prior information is available, shrinkage estimators for the parameters of various distributions tend to outperform standard estimators in terms of MSE, especially when the estimated value is close to the true value [22,27,28].
The use of prior information in estimating the common mean has several significant advantages. First, it allows researchers to leverage relevant past knowledge, which may come from historical data, expert opinion, or preliminary investigations, improving the accuracy of the estimation process. Secondly, estimators that incorporate such information tend to strike a compromise between bias and variance, resulting in estimates that are more efficient than traditional unbiased estimators, particularly in circumstances with small sample sizes. However, there is limited research on point estimation of μ that proposes preliminary test-based estimators. Therefore, in this study, we propose a preliminary test estimator for the common mean with unknown and unequal variances. To find a suitable estimator, the properties of the proposed preliminary test estimator will be examined, including its theoretical basis and performance criteria such as bias and MSE.

2. Background

To define the current problem, we assume there are k independent normal populations with a common mean (μ) but with unknown and potentially unequal variances σ_1², …, σ_k² > 0. We assume we have independent and identically distributed (i.i.d.) observations X_{i1}, …, X_{in_i} from N(μ, σ_i²), i = 1, 2, …, k, and we define X̄_i and S_i² as
\bar{X}_i = \frac{1}{n_i}\sum_{j=1}^{n_i} X_{ij}, \qquad S_i^2 = \frac{1}{n_i - 1}\sum_{j=1}^{n_i}\left(X_{ij} - \bar{X}_i\right)^2,
where X̄_i ~ N(μ, σ_i²/n_i) and (n_i − 1)S_i²/σ_i² ~ χ²_{n_i−1}. Note that the statistics {X̄_i, S_i², i = 1, …, k} are all mutually independent. Again, it can be noted that {X̄_1, S_1², X̄_2, S_2², …, X̄_k, S_k²} are minimal sufficient statistics for (μ, σ_1², σ_2², …, σ_k²) but not complete [29]. As a result, one cannot obtain the uniformly minimum variance unbiased estimator (UMVUE), if it exists, by applying the standard Rao-Blackwell theorem to an unbiased estimator of μ. For the case where the population variances are fully known, μ can be readily estimated as
\hat{\mu} = \sum_{i=1}^{k}\frac{n_i}{\sigma_i^2}\,\bar{X}_i \Big/ \sum_{i=1}^{k}\frac{n_i}{\sigma_i^2}, \qquad \operatorname{Var}(\hat{\mu}) = \left(\sum_{i=1}^{k} n_i/\sigma_i^2\right)^{-1}.
This estimator, μ ^ , is the UMVUE, the best linear unbiased estimator (BLUE), and the maximum likelihood estimator (MLE). In the context of our current problem, where the population variances are unknown and possibly unequal, the most appealing unbiased estimator for μ is the Graybill-Deal (GD) estimator [11], which is
\hat{\mu}_{GD} = \sum_{i=1}^{k}\frac{n_i}{S_i^2}\,\bar{X}_i \Big/ \sum_{i=1}^{k}\frac{n_i}{S_i^2}, \qquad \operatorname{Var}(\hat{\mu}_{GD}) = E\left[\sum_{i=1}^{k}\frac{n_i\,\sigma_i^2}{S_i^4} \Big/ \left(\sum_{i=1}^{k}\frac{n_i}{S_i^2}\right)^{2}\right].
In the case of two samples, Graybill and Deal [11] first demonstrated that the unbiased estimator μ̂_GD in Equation (6) has a lower variance than either sample mean, provided that both sample sizes exceed 10.
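Computationally, μ̂_GD requires only the study-level sample sizes, means, and variances. A minimal R sketch (the helper name and the numbers in the example call are ours, for illustration):

```r
# Graybill-Deal estimator of the common mean from k study-level summaries.
# Each study is weighted by n_i / S_i^2, the inverse of the estimated variance
# of its sample mean.
gd_estimate <- function(n, xbar, s2) {
  w <- n / s2
  sum(w * xbar) / sum(w)
}

# Illustrative call with three hypothetical studies
gd_estimate(n = c(10, 15, 20), xbar = c(1.02, 0.95, 1.10), s2 = c(0.8, 1.3, 1.0))
```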
Khatri and Shah [30] proposed an exact variance formula for μ ^ G D which is complex and not easily applied. To tackle this inference issue, Meier [9] derived a first-order approximation of the variance of μ ^ G D , given by
\operatorname{Var}(\hat{\mu}_{GD}) = \left(\sum_{i=1}^{k}\frac{n_i}{\sigma_i^2}\right)^{-1}\left[1 + 2\sum_{i=1}^{k}\frac{1}{n_i-1}\,c_i\left(1-c_i\right) + O\!\left(\sum_{i=1}^{k}\frac{1}{(n_i-1)^{2}}\right)\right],
where c_i = \dfrac{n_i/\sigma_i^2}{\sum_{j=1}^{k} n_j/\sigma_j^2}.
A few years later, Sinha [31] developed an unbiased estimator for the variance of μ̂_GD that takes the form of a convergent series. A first-order approximation of this estimator is
\widehat{\operatorname{Var}}^{(1)}(\hat{\mu}_{GD}) = \left(\sum_{i=1}^{k} n_i/s_i^2\right)^{-1}\left[1 + \sum_{i=1}^{k}\frac{4}{n_i+1}\left(\frac{n_i/s_i^2}{\sum_{j=1}^{k} n_j/s_j^2} - \frac{n_i^2/s_i^4}{\left(\sum_{j=1}^{k} n_j/s_j^2\right)^{2}}\right)\right].
The above estimator is comparable to Meier’s [9] approximate estimator, defined as
\widehat{\operatorname{Var}}^{(2)}(\hat{\mu}_{GD}) = \left(\sum_{i=1}^{k} n_i/s_i^2\right)^{-1}\left[1 + \sum_{i=1}^{k}\frac{4}{n_i-1}\left(\frac{n_i/s_i^2}{\sum_{j=1}^{k} n_j/s_j^2} - \frac{n_i^2/s_i^4}{\left(\sum_{j=1}^{k} n_j/s_j^2\right)^{2}}\right)\right].
The “classical” meta-analysis variance estimator, V̂ar^(3), is given as
\widehat{\operatorname{Var}}^{(3)}(\hat{\mu}_{GD}) = \left(\sum_{i=1}^{k} n_i/s_i^2\right)^{-1}.
The approximate variance estimator proposed by Hartung [32], V̂ar^(4), is given as
\widehat{\operatorname{Var}}^{(4)}(\hat{\mu}_{GD}) = \frac{1}{k-1}\sum_{i=1}^{k}\frac{n_i/s_i^2}{\sum_{j=1}^{k} n_j/s_j^2}\left(\bar{X}_i - \hat{\mu}_{GD}\right)^{2}.
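All four approximate variance estimators are simple functions of the summaries (n_i, X̄_i, s_i²) and can be computed together. The sketch below is our own helper following the first-order formulas quoted above; the example call uses the albumin summaries that appear later in Table 4.

```r
# Approximate variance estimators for the Graybill-Deal estimator.
gd_variances <- function(n, xbar, s2) {
  w     <- n / s2                 # estimated weights n_i / s_i^2
  W     <- sum(w)
  c_hat <- w / W                  # estimated c_i
  mu_gd <- sum(w * xbar) / W
  v1 <- (1 / W) * (1 + sum(4 / (n + 1) * (c_hat - c_hat^2)))  # Sinha (1985), first order
  v2 <- (1 / W) * (1 + sum(4 / (n - 1) * (c_hat - c_hat^2)))  # Meier (1953) approximation
  v3 <- 1 / W                                                 # "classical" meta-analysis
  v4 <- sum(c_hat * (xbar - mu_gd)^2) / (length(n) - 1)       # Hartung (1999)
  c(Var1 = v1, Var2 = v2, Var3 = v3, Var4 = v4)
}

gd_variances(n    = c(12, 15, 7, 16),
             xbar = c(62.30, 60.30, 59.50, 61.50),
             s2   = c(12.99, 7.84, 33.43, 18.51))
```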

3. Proposed Preliminary Test Estimator

It is reasonable to test a null hypothesis when uncertain non-sample prior information is available. A preliminary test estimator is a two-step process that estimates a key parameter using the results of a preliminary test. To estimate μ , we consider the hypothesis
H_0: \mu = \mu_0 \quad \text{vs.} \quad H_1: \mu \neq \mu_0.
Our proposed preliminary test estimate for μ is as follows:
\hat{\mu}_{PT} = \begin{cases} \mu_0, & \text{if } H_0 \text{ is accepted}, \\[4pt] \hat{\mu}_{GD} = \sum_{i=1}^{k}\dfrac{n_i}{s_i^2}\,\bar{X}_i \Big/ \sum_{i=1}^{k}\dfrac{n_i}{s_i^2}, & \text{if } H_0 \text{ is rejected}, \end{cases}
where μ̂_GD is the unbiased estimator of μ. Shih et al. [25] defined a: ℝ → [0, 1] to be a test function with a = 0 (accept H0), a = 1 (reject H0), and 0 < a < 1 (reject H0 with probability a). For 0 ≤ α1 ≤ α2 ≤ 1, the randomized test is defined as
a(X) = \begin{cases} 1, & \text{if } |t_{obs}| > t_{\alpha_1/2,\,n-1}, \\ q(X), & \text{if } t_{\alpha_2/2,\,n-1} < |t_{obs}| \le t_{\alpha_1/2,\,n-1}, \\ 0, & \text{if } |t_{obs}| \le t_{\alpha_2/2,\,n-1}. \end{cases}
The rejection or failure to reject H0 is based on the t statistic. A standard notation for a t statistic based on a sample of size n is t_obs = √n(x̄ − μ0)/s. We refer to this t computed from a specific set of data as the observed value of our test statistic, and we reject H0 when |t_obs| > t_{α/2,ν}, where ν = n − 1 is the degrees of freedom and α is the Type-I error level. A test for H0 based on a p-value, on the other hand, uses P_obs = P[|t_ν| > |t_obs|], and we reject H0 at level α if P_obs < α. As usual, t_ν stands for the central t variable with ν degrees of freedom, and t_{α/2,ν} stands for the upper α/2 percentile of t_ν. The general preliminary test estimator [33] can be defined as
\hat{\mu}_{GPT} = a(X)\,\hat{\mu}_{GD} + \left[1 - a(X)\right]\mu_0.
The estimator can also be written as
\hat{\mu}_{GPT} = \mu_0 + (\hat{\mu}_{GD} - \mu_0)\left[\, I\!\left(|t_{obs}| > t_{\alpha_1/2,\,n-1}\right) + q(X)\, I\!\left(t_{\alpha_2/2,\,n-1} < |t_{obs}| \le t_{\alpha_1/2,\,n-1}\right)\right].
In this study, we focus only on the case where α 1 = α 2 = α . We can define our proposed preliminary test estimator with unknown variance as
\hat{\mu}_{PT} = \mu_0 + (\hat{\mu}_{GD} - \mu_0)\, I\!\left(|t_{ran}| > t_{\alpha/2,\,n-1}\right),
where I(·) is the indicator function, defined as I(A) = 1 if A is true and I(A) = 0 if A is false. A random p-value, which has a Uniform(0, 1) distribution under H0, is defined as P_ran = P[|t_ν| > |t_ran|], where t_ran = √n(X̄ − μ0)/s. Most suggested tests for H0 are based on P_obs and t_obs values. To simplify the notation, we will denote P_obs by lowercase p and P_ran by uppercase P. In our context, we have independent t statistics t_1, …, t_k and independent p-values P_1, …, P_k. In the following, we present various test procedures for testing μ = μ0 based on suitable combinations of the t_i's and P_i's [34]. Depending on the test procedure used, the rejection set A will be defined and used to compute the bias and MSE of the preliminary test estimator of the common mean μ.
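In practice, the pretest estimator therefore needs only the per-study t statistics (or p-values), a 0/1 rejection decision from one of the tests described below, and the GD estimate. A minimal R sketch of these building blocks (function names are our own):

```r
# Per-study t statistics and two-sided p-values for H0: mu = mu0
study_t_p <- function(n, xbar, s2, mu0) {
  t_i <- sqrt(n) * (xbar - mu0) / sqrt(s2)
  p_i <- 2 * pt(abs(t_i), df = n - 1, lower.tail = FALSE)
  list(t = t_i, p = p_i)
}

# Preliminary test estimator: keep mu0 unless H0 is rejected by the chosen test
pretest_estimate <- function(n, xbar, s2, mu0, reject) {
  w     <- n / s2
  mu_gd <- sum(w * xbar) / sum(w)
  if (reject) mu_gd else mu0
}
```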

3.1. P-Value Based Exact Tests

Suppose P_1, …, P_k are independent p-values obtained from k continuous distributions of test statistics; when the individual hypothesis H_{0i} is true, P_i is uniformly distributed over the interval (0, 1). We test the joint null hypothesis H0: μ = μ0 versus H1: μ ≠ μ0. Five p-value-based exact tests, based on the t_obs and p-values from the k independent studies and available in the literature, are listed below.

3.1.1. Tippett’s Test

Suppose P(1), …, P(k) are the ordered p-values of the k independent studies. Then H0 is rejected if P(1) < α*, where, for an overall significance level α, α* = 1 − (1 − α)^{1/k}. Interestingly, this test is equivalent to the test based on M_T = max_{1≤i≤k} t_i suggested by Cohen and Sackrowitz [35]. This S_T = min(P_1, …, P_k) test was proposed by Tippett [36] and is also called the union-intersection test.
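A sketch of Tippett's rule in R, acting on a vector of two-sided study p-values:

```r
# Tippett's (minimum p-value) test: reject H0 if min(p) < 1 - (1 - alpha)^(1/k)
tippett_reject <- function(p, alpha = 0.05) {
  min(p) < 1 - (1 - alpha)^(1 / length(p))
}

tippett_reject(c(0.02, 0.29, 1.00, 0.08))  # illustrative p-values
```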

3.1.2. Wilkinson’s Test

Wilkinson [37] provided a generalization of Tippett's test, where P(1) ≤ P(2) ≤ ⋯ ≤ P(k) are the ordered p-values and the r-th smallest p-value, P(r), is the test statistic. The common mean null hypothesis H0: μ = μ0 is rejected if P(r) < d_{r,α}, where P(r) follows a beta distribution with parameters r and k − r + 1 under the null hypothesis and d_{r,α} satisfies Pr(P(r) < d_{r,α} | H0) = α. This generates a series of tests for the various values of r = 1, 2, …, k.
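A corresponding sketch of Wilkinson's rule, using the Beta(r, k − r + 1) null distribution of P(r):

```r
# Wilkinson's test: reject H0 if the r-th smallest p-value falls below the
# alpha-quantile of its null Beta(r, k - r + 1) distribution.
wilkinson_reject <- function(p, r = 2, alpha = 0.05) {
  k <- length(p)
  sort(p)[r] < qbeta(alpha, r, k - r + 1)
}
```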

3.1.3. Inverse Normal Test

Stouffer et al. [38] reported that the inverse normal test procedure involves transforming the p-values to the corresponding standard normal quantiles. The test statistic is defined as Z = (1/√k) Σ_{i=1}^{k} Φ^{-1}(P_i), where Φ is the standard normal cumulative distribution function (CDF). The common mean null hypothesis H0: μ = μ0 is rejected if Z < −z_α, where z_α denotes the upper α level cut-off point of the standard normal distribution.
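A sketch of the inverse normal rule, following the rejection region stated above (reject for a sufficiently small combined Z):

```r
# Stouffer's inverse normal test: combine the probit-transformed p-values and
# reject H0 when the standardized sum is below -z_alpha.
inverse_normal_reject <- function(p, alpha = 0.05) {
  z <- sum(qnorm(p)) / sqrt(length(p))
  z < -qnorm(1 - alpha)
}
```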

3.1.4. Fisher’s Inverse χ 2 -Test

Fisher [39] noted that the test statistic t_F = −2 Σ_{i=1}^{k} ln P_i = −2 ln(∏_{i=1}^{k} P_i) has a χ² distribution with 2k degrees of freedom when H0 is true. This procedure uses the product ∏_{i=1}^{k} P_i to combine the k independent p-values. The common mean null hypothesis H0: μ = μ0 is rejected if t_F > χ²_{2k,α}, where χ²_{2k,α} denotes the upper α critical value of the χ²-distribution with 2k degrees of freedom.
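A sketch of Fisher's inverse χ² rule in R:

```r
# Fisher's test: -2 * sum(log(p_i)) has a chi-square distribution with 2k
# degrees of freedom under H0.
fisher_reject <- function(p, alpha = 0.05) {
  -2 * sum(log(p)) > qchisq(1 - alpha, df = 2 * length(p))
}
```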

3.1.5. The Logit Test

This exact test procedure, which involves transforming each p-value into a logit, was proposed by Mudholkar and George [40]. The test statistic is defined as G = −Σ_{i=1}^{k} ln(P_i/[1 − P_i]) · [3(5k + 4)/(kπ²(5k + 2))]^{1/2}, which approximately follows Student's t-distribution with 5k + 4 degrees of freedom under H0. The common mean null hypothesis H0: μ = μ0 is rejected if G > t_{5k+4,α}, the upper α percentile of that distribution.
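A sketch of the logit rule in R, using the scaling constant and the t_{5k+4} reference distribution as reconstructed above:

```r
# Logit (Mudholkar-George) test: the scaled sum of logits of the p-values is
# approximately Student-t with 5k + 4 degrees of freedom under H0.
logit_reject <- function(p, alpha = 0.05) {
  k <- length(p)
  G <- -sum(log(p / (1 - p))) * sqrt(3 * (5 * k + 4) / (k * pi^2 * (5 * k + 2)))
  G > qt(1 - alpha, df = 5 * k + 4)
}
```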

3.2. Exact Tests

3.2.1. Modified t

Fairweather [41] suggested using a weighted linear combination of the t_i's, namely T_1 = Σ_{i=1}^{k} w_{1i} t_i, where w_{1i} = [Var(t_i)]^{-1} / Σ_{j=1}^{k} [Var(t_j)]^{-1}, with Var(t_i) = ν_i(ν_i − 2)^{-1} − [Γ((ν_i − 1)/2)√ν_i / (Γ(ν_i/2)√π)]² and ν_i = n_i − 1. The null hypothesis H0: μ = μ0 is rejected if T_1 > d_{1α}, where Pr(T_1 > d_{1α} | H0) = α, with d_{1α} computed by simulation.
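A sketch of the modified t procedure in R; the weights follow the Var(t_i) expression quoted above, and the critical value d_{1α} is approximated by Monte Carlo under H0 (helper names and the simulation size are our own choices):

```r
# Weighted combination of the study t statistics (Fairweather-type modified t)
modified_t_stat <- function(t, nu) {
  v <- nu / (nu - 2) - (gamma((nu - 1) / 2) * sqrt(nu) / (gamma(nu / 2) * sqrt(pi)))^2
  w <- (1 / v) / sum(1 / v)
  sum(w * t)
}

# Upper-alpha critical value simulated under H0 (each t_i is then central t_{nu_i})
modified_t_crit <- function(nu, alpha = 0.05, nsim = 1e4) {
  sims <- replicate(nsim, modified_t_stat(rt(length(nu), df = nu), nu))
  quantile(sims, 1 - alpha)
}
```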

3.2.2. Modified F

Jordan and Krishnamoorthy [42] suggested using a weighted linear combination of the F_i's (where F_i = t_i²), namely T_2 = Σ_{i=1}^{k} w_{2i} F_i, where w_{2i} = [Var(F_i)]^{-1} / Σ_{j=1}^{k} [Var(F_j)]^{-1}, with Var(F_i) = 2ν_i²(ν_i − 1) / [(ν_i − 2)²(ν_i − 4)] for ν_i > 4. The null hypothesis H0: μ = μ0 is rejected if T_2 > d_{2α}, where Pr(T_2 > d_{2α} | H0) = α, with d_{2α} computed by simulation.
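A corresponding sketch of the modified F statistic (taking F_i = t_i², as discussed above); its critical value d_{2α} would be simulated under H0 in the same way as for the modified t:

```r
# Weighted combination of F_i = t_i^2 (Jordan-Krishnamoorthy modified F);
# the weight formula requires nu_i > 4.
modified_F_stat <- function(t, nu) {
  v <- 2 * nu^2 * (nu - 1) / ((nu - 2)^2 * (nu - 4))
  w <- (1 / v) / sum(1 / v)
  sum(w * t^2)
}
```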

3.3. Properties of the Proposed Preliminary Test Estimator

3.3.1. Bias

The bias of the proposed preliminary test estimator is equal to E[μ̂_PT − μ], where
\operatorname{Bias}(\hat{\mu}_{PT}) = E[\hat{\mu}_{PT}] - \mu
= E\left[\mu_0 + (\hat{\mu}_{GD} - \mu_0)\, I(A)\right] - \mu
= \mu_0\, E\left[1 - I(A)\right] + E\left[\hat{\mu}_{GD}\, I(A)\right] - \mu.
Given that both the rejection of H0 and μ̂_GD depend on the sample means X̄_i and the sample variances S_i², it may be concluded that μ̂_GD and I(·) are not mutually independent.

3.3.2. Mean Square Error

The MSE of μ ^ P T can be expressed as
\operatorname{MSE}(\hat{\mu}_{PT}) = E\left[(\hat{\mu}_{PT} - \mu)^2\right]
= \operatorname{Var}(\hat{\mu}_{PT}) + \left(E[\hat{\mu}_{PT} - \mu]\right)^2
= \operatorname{Var}(\hat{\mu}_{PT}) + \left(\mu_0\, E[1 - I(A)] + E[\hat{\mu}_{GD}\, I(A)] - \mu\right)^2
= \mu_0^2\operatorname{Var}\!\left[I(A^{c})\right] + \operatorname{Var}\!\left[\hat{\mu}_{GD}\, I(A)\right] + 2\mu_0\operatorname{Cov}\!\left(I(A^{c}),\, \hat{\mu}_{GD}\, I(A)\right) + \left(\mu_0\, E[I(A^{c})] + E[\hat{\mu}_{GD}\, I(A)] - \mu\right)^2.

4. Simulation Study

Bias and Mean Squared Error

We will now assess how the proposed preliminary test estimator (μ̂_PT) performs in terms of bias and MSE. To achieve a high level of accuracy, each simulated bias and MSE value was calculated using Q = 10^5 replications. It should be noted that the MSE and relative efficiency (RE) of the proposed preliminary test estimator are functions of n_1, n_2, and δ = σ_1²/σ_2²; here, n_1 and n_2 denote the sample sizes and δ is the ratio of the two population variances. These extensive computations were carried out using the statistical software R [43]. The procedure for our proposed preliminary test estimator of the common mean is defined as follows:
  • Select two positive integers n 1 and n 2 .
  • Generate independent random observations X_{1i}, i = 1, …, n_1, from N(μ, σ_1²) and X_{2i}, i = 1, …, n_2, from N(μ, σ_2²).
  • Test H_0: μ = μ_0 versus H_1: μ ≠ μ_0 at significance level α using the p-value-based and exact tests described in Section 3.
  • If we fail to reject H_0, we take μ̂_PT = μ_0; if H_0 is rejected, we take μ̂_PT = μ̂_GD.
  • The effectiveness of the proposed estimator is assessed using the simulated bias, Q^{-1} Σ_{q=1}^{Q} (μ̂_q − μ), and the simulated MSE, Q^{-1} Σ_{q=1}^{Q} (μ̂_q − μ)².
The expression provided in Equation (12) for the bias can be computed for various values of δ = (0.6, 1.0, 1.2, and 2.0), μ = (−1.0, 0.0, and 1.0), and n_1, n_2 = (10, 15, 20, 25, 50, 60, and 100). Without loss of generality, in our computed simulated bias and MSE, we set α = 0.05, μ_0 = 0, and σ_2² = 1.
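For illustration, a compressed version of this simulation for a single (n_1, n_2, δ, μ) configuration, using Fisher's inverse χ² test as the preliminary test, might look as follows (a sketch only; Q is reduced here relative to the 10^5 replications used in the study):

```r
# Monte Carlo bias and MSE of the pretest estimator for one configuration,
# with sigma1^2 = delta and sigma2^2 = 1, using Fisher's combination test.
simulate_pretest <- function(n1, n2, delta, mu, mu0 = 0, alpha = 0.05, Q = 1e4) {
  est <- replicate(Q, {
    x1   <- rnorm(n1, mu, sd = sqrt(delta))
    x2   <- rnorm(n2, mu, sd = 1)
    n    <- c(n1, n2); xbar <- c(mean(x1), mean(x2)); s2 <- c(var(x1), var(x2))
    p    <- 2 * pt(abs(sqrt(n) * (xbar - mu0) / sqrt(s2)), df = n - 1, lower.tail = FALSE)
    reject <- -2 * sum(log(p)) > qchisq(1 - alpha, df = 2 * length(p))
    if (reject) sum(n / s2 * xbar) / sum(n / s2) else mu0
  })
  c(bias = mean(est - mu), mse = mean((est - mu)^2))
}

set.seed(2024)
simulate_pretest(n1 = 10, n2 = 10, delta = 0.6, mu = 0)
```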
Remark 1.
Table 1, Table 2 and Table 3 provide some illustrative values. Generally, it is observed that as δ increases, the bias increases in magnitude for unequal sample sizes; for values of δ close to 1, the bias approaches zero. Furthermore, the comparison of the tables reveals that as the sample size increases, the magnitude of the bias decreases. As μ deviates further from μ0, the bias becomes larger and depends on n2. In particular, when n > 25 and δ < 2, the bias of our proposed test estimator appears to approach zero. Similarly, as μ deviates further from μ0, the MSE becomes larger, whereas it is essentially independent of n2 when μ = μ0.
Remark 2.
Table 1, Table 2 and Table 3 illustrate the changes in MSE with respect to both δ and sample size. Specifically, an increase in δ leads to a corresponding increase in MSE. Furthermore, the comparison across the tables shows that as the sample size grows, the MSE decreases accordingly. The minimum MSE is consistently observed when the estimated value is close to the true value μ = μ 0 , regardless of the test performed. It is also noteworthy that the MSE values are nearly identical across all p-value-based tests, except for the Inverse Normal test. On the other hand, the modified exact tests tend to produce higher MSE values compared to their P-value-based counterparts.
Remark 3.
To evaluate the performance of the proposed preliminary test estimator (μ̂_PT) in comparison to the conventional single-stage estimator (μ̂_GD) using equal sample sizes (n1 = n2 = n) and a fixed significance level (α = 0.05), it is observed that as the sample size n increases, the MSE generally decreases. Notably, when μ is close to the hypothesized mean (μ0), the preliminary test estimator outperforms the unbiased estimator across various values of δ; the range of values of μ where the preliminary test estimator excels can be referred to as its effective interval. After reaching a minimum at μ = 0, a slight rise in MSE is observed as μ deviates further from μ0. This trend is evident in the results depicted in Figure 1a,b, indicating that for δ = 1.2 and δ = 0.6, the proposed estimator performs better than the unbiased estimator when −0.2 ≤ μ ≤ 0.2. Similarly, for δ = 1.2 and δ = 0.6, the proposed estimator outperforms the unbiased estimator when −0.12 ≤ μ ≤ 0.12 and −0.08 ≤ μ ≤ 0.08, respectively (as shown in Figure 1c,e for n = 30). Again, for δ = 1.2 and δ = 0.6, the proposed estimator outperforms the unbiased estimator when −0.12 ≤ μ ≤ 0.12 and −0.06 ≤ μ ≤ 0.06, respectively (as shown in Figure 1d,f for n = 60). The preliminary test estimator, employing the Tippett, Wilkinson (r = 2), Fisher's inverse χ², logit, and modified t tests, demonstrates satisfactory performance within its effective interval, as indicated by the MSE values. These findings are consistent with the conclusions drawn by Kifle et al. regarding the efficacy of the Fisher's inverse χ² and modified t tests across various sample sizes and significance levels [33].

5. Application in Biological Research

To demonstrate the practical applicability of the proposed preliminary test estimator, we analyzed data from four experiments used to estimate the percentage of albumin in the plasma protein of normal human subjects. This dataset is reported in Meier [9] and appears in Table 4. For this dataset, previous studies focusing on the test problem [44,45] have compared various test procedures for testing H0: μ = 59.50 versus H1: μ ≠ 59.50.
In our scenario, we could consider 59.50 as our non-sample prior information and apply our proposed preliminary test estimator to address this issue. According to the findings presented in Table 5, the estimated mean ( μ ^ P T ) derived from p-value based tests (including Tippett’s, Wilkinson ( r = 3 and r = 4 ), Inverse normal, and Fisher’s tests) notably integrates the non-sample prior information.
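The Table 5 entries can be reproduced directly from the Table 4 summaries. As an illustration, a short R sketch of the Fisher-based column is given below; the other columns follow by swapping in the corresponding combination rule.

```r
# Albumin data (Table 4): pretest estimator with mu0 = 59.50 and Fisher's test
n    <- c(12, 15, 7, 16)
xbar <- c(62.30, 60.30, 59.50, 61.50)
s2   <- c(12.99, 7.84, 33.43, 18.51)
mu0  <- 59.50

t_i    <- sqrt(n) * (xbar - mu0) / sqrt(s2)
p_i    <- 2 * pt(abs(t_i), df = n - 1, lower.tail = FALSE)
reject <- -2 * sum(log(p_i)) > qchisq(0.95, df = 2 * length(p_i))

mu_gd <- sum(n / s2 * xbar) / sum(n / s2)
mu_pt <- if (reject) mu_gd else mu0
c(GD = mu_gd, PT = mu_pt, reject = reject)
```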
In our second application of the proposed preliminary test estimator, we analyzed data from four experiments measuring the selenium content in non-fat milk powder. This dataset is reported by Eberhardt et al. [10] and appears in Table 6. We can compute values of μ̂_PT for different values of μ0 with fixed sampling values, based on the p-value-based and modified exact tests. The resulting values are shown in Table 7.
The findings presented in Table 7 suggest that when μ0 is below 110.00, μ̂_PT = μ̂_GD for all tests. When μ0 equals 110.00 or 110.50, the Tippett, Wilkinson (r = 2, r = 3, and r = 4), Fisher, and logit tests do not reject the null hypothesis (H0), so the estimated common mean (μ) equals μ0, whereas the other tests reject H0 and estimate the common mean as μ̂_GD = 109.60. For μ0 = 111.00, the tests based on Wilkinson's statistic (r = 2 and r = 3) and the modified F test also fail to reject H0, with an estimated μ equal to 111.00. The inverse normal and modified t tests rejected the null hypothesis for all considered values of μ0. This may be because the inverse normal test transforms p-values into z-scores and combines them, whereas the modified t test adjusts the traditional t test procedure to address specific issues such as heteroscedasticity or small sample sizes.
From the above results, we do not intend to draw any broad conclusions here, but our results suggest that the proposed preliminary test estimator based on Tippett's, Wilkinson's (r = 2, r = 3, and r = 4), Fisher's, and the logit tests is feasible and could be applied to this specific case if prior information about the population mean is available.

6. Conclusions

The past decade has witnessed increased interest in estimating unknown quantities using data from multiple independent yet non-homogeneous samples. This approach finds application across various domains, as evidenced by the diverse range of applications discussed in the recent book by Sinha et al. [2]. In this study, we introduce a preliminary test estimator that integrates non-sample prior information. Our simulations indicate that this proposed estimator exhibits distinct advantages in certain scenarios, particularly when dealing with very small sample sizes and situations where σ_2² exceeds σ_1². Notably, the proposed estimator significantly reduces MSE values compared to traditional unbiased estimators, especially when μ is in proximity to μ_0. Moreover, the performance of the proposed estimator, when based on Tippett's, Wilkinson's (r = 2, r = 3, and r = 4), Fisher's, and logit tests, surpasses that of μ̂_GD, particularly in cases involving very small sample sizes. For substantial sample sizes, the suggested estimator, deploying the inverse normal and modified F tests, appeared to demonstrate consistent and dependable performance, with an MSE discrepancy of less than 0.02 relative to that of the unbiased estimator. Consequently, we advocate for the adoption of the proposed estimator to enhance the accuracy of μ estimation. Nevertheless, no estimator is universally optimal across all scenarios, so it becomes crucial to select an appropriate estimator tailored to each specific scenario. The decision on which estimator to employ relies on the objectives of the research, making it challenging to devise a purely statistical strategy for selection. Our findings in this article suggest that, through careful application to real meta-analyses, the proposed estimator exhibits promising potential.
This article primarily considered the scenario under the general preliminary test estimator whereby α_1 = α_2 = α. Extensions of this work could explore cases where α_1 = 0 and α_2 = 1 through the introduction of a randomized test in which the probability function q(·) is treated as a shrinkage parameter; the proposed estimator would then transition to a non-randomized form [25]. Additionally, it is pertinent to highlight that this study focuses on the univariate common mean of multiple normal populations. Future extensions could broaden the scope to encompass multiple responses, such as a bivariate common mean.

Author Contributions

Conceptualization, P.M.M. and Y.G.K.; methodology, P.M.M. and Y.G.K.; software, P.M.M.; validation, P.M.M. and Y.G.K.; formal analysis, P.M.M.; investigation, P.M.M. and Y.G.K.; resources, P.M.M.; data curation, P.M.M.; writing—original draft preparation, P.M.M.; writing—review and editing, P.M.M., Y.G.K. and C.S.M.; visualization, P.M.M.; supervision, Y.G.K. and C.S.M.; project administration, Y.G.K. and C.S.M.; funding acquisition, Y.G.K. and C.S.M. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the University Staff Doctoral Programme (USDP) hosted by the University of Limpopo in collaboration with the University of Maryland Baltimore County. Again, the first author acknowledges the financial support from the Research and Innovation Department of the University of Fort Hare.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The original contributions presented in the study are included in the article; further inquiries can be directed to the corresponding author.

Acknowledgments

The authors extend their gratitude to Bimal Sinha of the University of Maryland in Baltimore County, USA, for his insightful guidance and support. Our heartfelt thanks go to three reviewers for their excellent comments, which helped us clarify several key points and enhance the quality of this paper.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Wang, X.M.; Zhang, X.R.; Li, Z.H.; Zhong, W.F.; Yang, P.; Mao, C. A brief introduction of meta-analyses in clinical practice and research. J. Gene Med. 2021, 23, e3312. [Google Scholar] [CrossRef] [PubMed]
  2. Sinha, B.K.; Hartung, J.; Knapp, G. Statistical Meta-Analysis with Applications; John Wiley & Sons: Hoboken, NJ, USA, 2011. [Google Scholar]
  3. Haidich, A.B. Meta-analysis in medical research. Hippokratia 2010, 14, 29. [Google Scholar] [PubMed]
  4. Glass, G.V. Primary, secondary, and meta-analysis of research. Educ. Res. 1976, 5, 3–8. [Google Scholar] [CrossRef]
  5. DerSimonian, R.; Laird, N. Evaluating the effect of coaching on SAT scores: A meta-analysis. Harv. Educ. Rev. 1983, 53, 1–15. [Google Scholar] [CrossRef]
  6. Hedges, L.V. Advances in statistical methods for meta-analysis. New Dir. Program Eval. 1984, 24, 25–42. [Google Scholar] [CrossRef]
  7. Liu, Q.; Qin, C.; Liu, M.; Liu, J. Effectiveness and safety of SARS-CoV-2 vaccine in real-world studies: A systematic review and meta-analysis. Infect. Dis. Poverty 2021, 10, 132. [Google Scholar] [CrossRef]
  8. Watanabe, A.; Kani, R.; Iwagami, M.; Takagi, H.; Yasuhara, J.; Kuno, T. Assessment of efficacy and safety of mRNA COVID-19 vaccines in children aged 5 to 11 years: A systematic review and meta-analysis. JAMA Pediatr. 2023, 177, 384–394. [Google Scholar] [CrossRef]
  9. Meier, P. Variance of a weighted mean. Biometrics 1953, 9, 59–73. [Google Scholar] [CrossRef]
  10. Eberhardt, K.R.; Reeve, C.P.; Spiegelman, C.H. A minimax approach to combining means, with practical examples. Chemom. Intell. Lab. Syst. 1989, 5, 129–148. [Google Scholar] [CrossRef]
  11. Graybill, F.A.; Deal, R. Combining unbiased estimators. Biometrics 1959, 15, 543–550. [Google Scholar] [CrossRef]
  12. Kubokawa, T. Admissible minimax estimation of a common mean of two normal populations. Ann. Stat. 1987, 1245–1256. [Google Scholar] [CrossRef]
  13. Brown, L.D.; Cohen, A. Point and confidence estimation of a common mean and recovery of interblock information. Ann. Stat. 1974, 2, 963–976. [Google Scholar] [CrossRef]
  14. Cohen, A.; Sackrowitz, H.B. On estimating the common mean of two normal distributions. Ann. Stat. 1974, 1274–1282. [Google Scholar] [CrossRef]
  15. Moore, B.; Krishnamoorthy, K. Combining independent normal sample means by weighting with their standard errors. J. Stat. Comput. Simul. 1997, 58, 145–153. [Google Scholar] [CrossRef]
  16. Huang, H. Combining estimators in interlaboratory studies and meta-analyses. Res. Synth. Methods 2023, 14, 526–543. [Google Scholar] [CrossRef] [PubMed]
  17. Dong, Y.F.; Chen, W.X.; Xie, M.Y. Best linear unbiased estimators of location and scale ranked set parameters under moving extremes sampling design. Acta Math. Appl. Sin. Engl. Ser. 2023, 39, 222–231. [Google Scholar] [CrossRef]
  18. Khatun, H.; Tripathy, M.R.; Pal, N. Hypothesis testing and interval estimation for quantiles of two normal populations with a common mean. Commun. Stat.-Theory Methods 2022, 51, 5692–5713. [Google Scholar] [CrossRef]
  19. Marić, N.; Graybill, F.A. Evaluation of a method for setting confidence intervals on the common mean of two normal populations. Commun. Stat.-Simul. Comput. 1979, 8, 53–60. [Google Scholar] [CrossRef]
  20. Pagurova, V.I.; Gurskii, V. A confidence interval for the common mean of several normal distributions. Theory Probab. Appl. 1980, 24, 882–888. [Google Scholar] [CrossRef]
  21. Krishnamoorthy, K.; Moore, B.C. Combining information for prediction in linear regression. Metrika 2002, 56, 73–81. [Google Scholar] [CrossRef]
  22. Bancroft, T.A. On biases in estimation due to the use of preliminary tests of significance. Ann. Math. Stat. 1944, 15, 190–204. [Google Scholar] [CrossRef]
  23. Stein, C. Inadmissibility of the usual estimator for the mean of a multivariate normal distribution. In Proceedings of the Third Berkeley Symposium on Mathematical Statistics and Probability, Volume 1: Contributions to the Theory of Statistic, 26–31 December 1954; University of California Press: Berkeley, CA, USA, 1956; Volume 3, pp. 197–207. [Google Scholar]
  24. Khan, S.; Saleh, A.M.E. On the comparison of the pre-test and shrinkage estimators for the univariate normal mean. Stat. Pap. 2001, 42, 451–473. [Google Scholar] [CrossRef]
  25. Shih, J.H.; Konno, Y.; Chang, Y.T.; Emura, T. A class of general pretest estimators for the univariate normal mean. Commun. Stat.-Theory Methods 2023, 52, 2538–2561. [Google Scholar] [CrossRef]
  26. Taketomi, N.; Konno, Y.; Chang, Y.T.; Emura, T. A meta-analysis for simultaneously estimating individual means with shrinkage, isotonic regression and pretests. Axioms 2021, 10, 267. [Google Scholar] [CrossRef]
  27. Thompson, J.R. Some shrinkage techniques for estimating the mean. J. Am. Stat. Assoc. 1968, 63, 113–122. [Google Scholar] [CrossRef]
  28. Mphekgwana, P.M.; Kifle, Y.G.; Marange, C.S. Shrinkage Testimator for the Common Mean of Several Univariate Normal Populations. Mathematics 2024, 12, 1095. [Google Scholar] [CrossRef]
  29. Pal, N.; Lin, J.J.; Chang, C.H.; Kumar, S. A revisit to the common mean problem: Comparing the maximum likelihood estimator with the Graybill–Deal estimator. Comput. Stat. Data Anal. 2007, 51, 5673–5681. [Google Scholar] [CrossRef]
  30. Khatri, C.; Shah, K. Estimation of location parameters from two linear models under normality. Commun. Stat.-Theory Methods 1974, 3, 647–663. [Google Scholar]
  31. Sinha, B.K. Unbiased estimation of the variance of the Graybill-Deal estimator of the common mean of several normal populations. Can. J. Stat. 1985, 13, 243–247. [Google Scholar] [CrossRef]
  32. Hartung, J. An alternative method for meta-analysis. Biom. J. J. Math. Methods Biosci. 1999, 41, 901–916. [Google Scholar] [CrossRef]
  33. Kifle, Y.G.; Moluh, A.M.; Sinha, B.K. Comparison of local powers of some exact tests for a common normal mean with unequal variances. In Methodology and Applications of Statistics; Springer: Berlin/Heidelberg, Germany, 2021; pp. 77–101. [Google Scholar]
  34. Kifle, Y.G.; Moluh, A.M.; Sinha, B.K. Inference about a Common Mean Vector from Several Independent Multinormal Populations with Unequal and Unknown Dispersion Matrices. Mathematics 2024, 12, 2723. [Google Scholar] [CrossRef]
  35. Cohen, A.; Sackrowitz, H. Exact tests that recover interblock information in balanced incomplete blocks designs. J. Am. Stat. Assoc. 1989, 84, 556–559. [Google Scholar] [CrossRef]
  36. Tippett, L.H.C. The Methods of Statistics: An Introduction Mainly for Workers in the Biological Sciences; Williams & Norgate: London, UK, 1931. [Google Scholar]
  37. Wilkinson, B. A statistical consideration in psychological research. Psychol. Bull. 1951, 48, 156. [Google Scholar] [CrossRef] [PubMed]
  38. Stouffer, S.A.; Suchman, E.A.; DeVinney, L.C.; Star, S.A.; Williams, R.M., Jr. The American Soldier: Adjustment during Army Life. (Studies in Social Psychology in World War ii); Princeton University Press: Princeton, NJ, USA, 1949; Volume 1. [Google Scholar]
  39. Fisher, R. Statistical Methods for Research Workers, 4th ed.; Oliver and Boyd: Edinburgh, Scotland; London, UK, 1932. [Google Scholar]
  40. George, E.O.; Mudholkar, G.S. The Logit Method for Combining Tests; Technical Report; Department of Statistics, Rochester University: Rochester, NY, USA, 1979. [Google Scholar]
  41. Fairweather, W.R. A method of obtaining an exact confidence interval for the common mean of several normal populations. J. R. Stat. Soc. Ser. (Appl. Stat.) 1972, 21, 229–233. [Google Scholar] [CrossRef]
  42. Jordan, S.M.; Krishnamoorthy, K. Exact confidence intervals for the common mean of several normal populations. Biometrics 1996, 52, 77–86. [Google Scholar] [CrossRef]
  43. R Core Team. R: A Language and Environment for Statistical Computing; R Foundation for Statistical Computing: Vienna, Austria, 2021. [Google Scholar]
  44. Chang, C.H.; Pal, N. Testing on the common mean of several normal distributions. Comput. Stat. Data Anal. 2008, 53, 321–333. [Google Scholar] [CrossRef]
  45. Li, X.; Williamson, P.P. Testing on the common mean of normal distributions using Bayesian methods. J. Stat. Comput. Simul. 2014, 84, 1363–1380. [Google Scholar] [CrossRef]
Figure 1. Efficiency of the estimator μ̂_PT based on p-value and modified exact tests with respect to μ̂_GD.
Table 1. Simulated bias (MSE) of the proposed estimator μ̂_PT for different choices of δ (n < 25).
δ | μ | (n_1, n_2) | Tippett | Wilkinson (r = 2) | Inverse Normal | Fisher | Logit | Modified t | Modified F
0.6−1.0(10, 10)−0.0013−0.0325−0.0010−0.0036−0.00270.00060.0028
   (0.0105)(0.0414)(0.0067)(0.0078)(0.0079)(0.0079)(0.0066)
(15, 15)0.0003−0.0386−0.0014−0.0002−0.00130.00320.0004
   (0.0111)(0.0431)(0.0067)(0.0074)(0.0076)(0.0080)(0.0067)
(20, 20)−0.1368−0.0344−0.0028−0.00180.00120.00170.0011
   (0.0989)(0.0431)(0.0067)(0.0075)(0.0073)(0.0078)(0.0067)
0.0(10, 10)0.00040.0013−0.0003−0.0018−0.0014−0.0018−0.0021
   (0.0003)(0.0003)(0.0063)(0.0003)(0.0003)(0.0005)(0.0058)
(15, 15)0.00010.00070.00010.00150.00000.00030.0021
   (0.0003)(0.0003)(0.0063)(0.0003)(0.0003)(0.0005)(0.0058)
(20, 20)0.02190.00040.0009−0.00010.0001−0.00040.0004
   (0.0330)(0.0003)(0.0063)(0.0003)(0.0003)(0.0006)(0.0058)
1.0(10, 10)−0.00020.0011−0.0012−0.0004−0.00290.00020.0003
   (0.0067)(0.0075)(0.0067)(0.0067)(0.0067)(0.0067)(0.0067)
(15, 15)−0.00280.0019−0.00020.0008−0.00380.0000−0.0003
   (0.0066)(0.0076)(0.0067)(0.0067)(0.0067)(0.0066)(0.0067)
(20, 20)−0.01740.00170.0018−0.0015−0.0001−0.00110.0011
   (0.0416)(0.0085)(0.0067)(0.0067)(0.0067)(0.0067)(0.0066)
1.0−1.0(10, 10)−0.0116−0.0362−0.0001−0.0068−0.0051−0.0103−0.0016
   (0.0429)(0.0540)(0.0110)(0.0195)(0.0186)(0.0330)(0.0114)
(15, 15)−0.0206−0.0346−0.0044−0.0069−0.0002−0.0072−0.0011
   (0.0397)(0.0532)(0.0111)(0.0195)(0.0181)(0.0295)(0.0112)
(20, 20)−0.2090−0.03750.0020−0.0059−0.0008−0.00750.0007
   (0.1394)(0.0536)(0.0111)(0.0209)(0.0187)(0.0296)(0.0111)
0.0(10, 10)0.0016−0.0030−0.0009−0.00070.0007−0.0026−0.0021
   (0.0005)(0.0005)(0.0106)(0.0004)(0.0004)(0.0009)(0.0092)
(15, 15)0.0003−0.00020.00240.0003−0.00090.0035−0.0014
   (0.0005)(0.0005)(0.0105)(0.0004)(0.0004)(0.0009)(0.0092)
(20, 20)0.07460.0026−0.00010.0006−0.00060.0006−0.0010
   (0.1027)(0.0005)(0.0105)(0.0005)(0.0004)(0.0009)(0.0092)
1.0(10, 10)−0.0021−0.00060.00000.00160.00040.0019−0.0031
   (0.0116)(0.0121)(0.0111)(0.0111)(0.0111)(0.0114)(0.0111)
(15, 15)−0.0055−0.0016−0.00100.00270.00040.0034−0.0029
   (0.0111)(0.0121)(0.0111)(0.0116)(0.0111)(0.0114)(0.0112)
(20, 20)−0.08680.00230.0038−0.00190.0004−0.0020−0.0033
   (0.1464)(0.0122)(0.0111)(0.0111)(0.0111)(0.0113)(0.0111)
2.0−1.0(10, 10)−0.0770−0.06430.0012−0.0297−0.0292−0.0798−0.0024
   (0.1472)(0.1301)(0.0222)(0.0805)(0.0738)(0.1763)(0.0289)
(15, 15)−0.0835−0.06240.0021−0.0295−0.0216−0.0727−0.0026
   (0.1475)(0.1311)(0.0221)(0.0712)(0.0763)(0.1786)(0.0271)
(20, 20)−0.2676−0.04610.0008−0.0324−0.0307−0.0726−0.0007
   (0.1764)(0.1277)(0.0221)(0.0720)(0.0730)(0.1830)(0.0273)
0.0(10, 10)−0.00280.00050.00440.00000.0011−0.00450.0003
   (0.0009)(0.0009)(0.0210)(0.0009)(0.0008)(0.0019)(0.0169)
(15, 15)−0.00010.00200.0033−0.0014−0.00120.00320.0038
   (0.0009)(0.0010)(0.0210)(0.0009)(0.0009)(0.0018)(0.0170)
(20, 20)0.18620.0027−0.00680.0002−0.00020.0032−0.0012
   (0.2115)(0.0009)(0.0210)(0.0008)(0.0008)(0.0018)(0.0170)
1.0(10, 10)0.0015−0.0005−0.00270.0038−0.00330.0058−0.0029
   (0.0279)(0.0304)(0.0222)(0.0226)(0.0225)(0.0537)(0.0221)
(15, 15)−0.0035−0.0003−0.00060.00070.00260.0049−0.0003
   (0.0301)(0.0299)(0.0222)(0.0226)(0.0227)(0.0484)(0.0222)
(20, 20)−0.19040.00750.0026−0.0035−0.00530.00960.0000
   (0.2784)(0.0291)(0.0222)(0.0228)(0.0229)(0.0470)(0.0227)
Table 2. Simulated bias (MSE) of the proposed estimator μ̂_PT for different choices of δ (n > 25).
δ | μ | (n_1, n_2) | Tippett | Wilkinson (r = 2) | Inverse Normal | Fisher | Logit | Modified t | Modified F
0.6−1.0(25, 25)0.0007−0.0015−0.0018−0.0030−0.0005−0.0002−0.0002
   (0.0017)(0.0017)(0.0017)(0.0017)(0.0017)(0.0017)(0.0017)
(50, 50)−0.00120.0012−0.00090.00030.0006−0.0006−0.0014
   (0.0017)(0.0018)(0.0017)(0.0017)(0.0017)(0.0017)(0.0017)
(100, 100)0.00240.00180.00100.0011−0.0016−0.0010−0.0008
   (0.0017)(0.0017)(0.0017)(0.0017)(0.0017)(0.0017)(0.0017)
0.0(25, 25)−0.0002−0.0001−0.0005−0.00070.00010.00110.0009
   (0.0001)(0.0001)(0.0016)(0.0001)(0.0001)(0.0001)(0.0016)
(50, 50)−0.00040.0008−0.0003−0.00080.0004−0.0001−0.0001
   (0.0001)(0.0001)(0.0016)(0.0001)(0.0001)(0.0001)(0.0016)
(100, 100)0.0001−0.00040.00120.0001−0.0002−0.00010.0015
   (0.0001)(0.0001)(0.0016)(0.0001)(0.0001)(0.0001)(0.0016)
1.0(25, 25)−0.0010−0.00020.00000.00090.00010.0006−0.0013
   (0.0017)(0.0017)(0.0017)(0.0017)(0.0017)(0.0017)(0.0017)
(50, 50)−0.00120.0004−0.00040.0000−0.0004−0.00030.0019
   (0.0017)(0.0017)(0.0017)(0.0017)(0.0017)(0.0017)(0.0017)
(100, 100)0.00020.0016−0.00180.0013−0.00080.00150.0007
   (0.0017)(0.0017)(0.0017)(0.0017)(0.0017)(0.0017)(0.0017)
1.0−1.0(25, 25)0.0001−0.0011−0.00190.00070.00230.00220.0010
   (0.0029)(0.0031)(0.0029)(0.0029)(0.0029)(0.0029)(0.0029)
(50, 50)−0.0010−0.0036−0.0024−0.00050.0007−0.0002−0.0027
   (0.0029)(0.0029)(0.0029)(0.0029)(0.0029)(0.0029)(0.0029)
(100, 100)−0.0014−0.00150.00030.00260.00030.0001−0.0005
   (0.0029)(0.0029)(0.0029)(0.0029)(0.0029)(0.0029)(0.0029)
0.0(25, 25)0.0002−0.00050.0002−0.0001−0.00060.00050.0006
   (0.0001)(0.0001)(0.0027)(0.0001)(0.0001)(0.0002)(0.0027)
(50, 50)0.00020.00070.0013−0.0005−0.00100.0015−0.0050
   (0.0001)(0.0001)(0.0027)(0.0001)(0.0001)(0.0002)(0.0027)
(100, 100)−0.00100.0000−0.00120.00050.00130.0002−0.0013
   (0.0001)(0.0001)(0.0027)(0.0001)(0.0001)(0.0002)(0.0027)
1.0(25, 25)−0.00200.0004−0.00080.00110.0005−0.0012−0.0001
   (0.0029)(0.0029)(0.0029)(0.0029)(0.0029)(0.0029)(0.0029)
(50, 50)0.0001−0.0025−0.0007−0.00010.00090.0003−0.0016
   (0.0028)(0.0029)(0.0029)(0.0029)(0.0028)(0.0029)(0.0029)
(100, 100)0.0003−0.00180.0017−0.0014−0.0006−0.00070.0017
   (0.0029)(0.0029)(0.0029)(0.0028)(0.0028)(0.0029)(0.0029)
2.0−1.0(25, 25)−0.0028−0.0015−0.00020.0018−0.0014−0.00020.0017
   (0.0057)(0.0061)(0.0057)(0.0057)(0.0057)(0.0065)(0.0057)
(50, 50)−0.00290.00220.00060.0006−0.00260.00110.0011
   (0.0057)(0.0059)(0.0057)(0.0057)(0.0057)(0.0064)(0.0057)
(100, 100)0.0007−0.0007−0.00090.00220.0037−0.0027−0.0029
   (0.0057)(0.0060)(0.0057)(0.0057)(0.0057)(0.0064)(0.0057)
0.0(25, 25)0.00100.00120.00160.0009−0.0004−0.00140.0003
   (0.0003)(0.0003)(0.0054)(0.0003)(0.0003)(0.0004)(0.0052)
(50, 50)0.0008−0.0002−0.0017−0.00100.0010−0.0025−0.0019
   (0.0003)(0.0003)(0.0054)(0.0003)(0.0003)(0.0004)(0.0051)
(100, 100)−0.0011−0.00100.00060.00000.0008−0.0001−0.0031
   (0.0003)(0.0003)(0.0054)(0.0003)(0.0003)(0.0004)(0.0051)
1.0(25, 25)−0.00210.0015−0.0017−0.00050.0022−0.00400.0005
   (0.0057)(0.0057)(0.0057)(0.0057)(0.0057)(0.0057)(0.0057)
(50, 50)0.00180.00010.0002−0.00180.0034−0.00120.0006
   (0.0057)(0.0057)(0.0057)(0.0057)(0.0057)(0.0057)(0.0057)
(100, 100)−0.00280.0023−0.00120.00020.0000−0.0024−0.0003
   (0.0057)(0.0057)(0.0057)(0.0057)(0.0057)(0.0057)(0.0057)
Table 3. Simulated bias (MSE) of the proposed estimator μ̂_PT for different choices of δ (n_1 ≠ n_2).
δ | μ | (n_1, n_2) | Tippett | Wilkinson (r = 2) | Inverse Normal | Fisher | Logit | Modified t | Modified F
0.6−1.0(10, 25)0.0018−0.0380−0.0004−0.00070.00050.00160.0043
   (0.0070)(0.0422)(0.0071)(0.0071)(0.0071)(0.01)(0.0135)
(10, 50)0.0003−0.03730.00220.00040.00070.00180.0039
   (0.0070)(0.0410)(0.0071)(0.0070)(0.0071)(0.0100)(0.0133)
(25, 10)−0.0035−0.00160.00090.0057−0.0003−0.00680.0013
   (0.0143)(0.0173)(0.0133)(0.0133)(0.0134)(0.0358)(0.0172)
0.0(10, 25)0.00140.0010−0.00150.00050.00130.00150.0030
   (0.0003)(0.0003)(0.0067)(0.0003)(0.0003)(0.0007)(0.0104)
(10, 50)−0.00050.00100.00030.0003−0.0003−0.0006−0.0069
   (0.0003)(0.0003)(0.0067)(0.0003)(0.0003)(0.0007)(0.0105)
(25, 10)−0.0012−0.00040.00300.00060.0002−0.0004−0.0034
   (0.0005)(0.0005)(0.0127)(0.0005)(0.0005)(0.0012)(0.0132)
1.0(10, 25)0.0010−0.00060.00100.0004−0.00330.0004−0.0001
   (0.0070)(0.0082)(0.0071)(0.0071)(0.0070)(0.0100)(0.0133)
(10, 50)−0.00030.0037−0.00190.00060.00250.00120.0005
   (0.0071)(0.0079)(0.0071)(0.007)(0.0070)(0.0100)(0.0134)
(25, 10)−0.00060.00050.00060.0009−0.0003−0.0020−0.0016
   (0.0133)(0.0134)(0.0133)(0.0133)(0.0133)(0.0172)(0.0171)
1.0−1.0(10, 25)−0.0012−0.04030.0033−0.0007−0.0980−0.0044−0.0078
   (0.0142)(0.0476)(0.0117)(0.0121)(0.0123)(0.0179)(0.0234)
(10, 50)0.0026−0.0343−0.0002−0.0005−0.09640.00130.0039
   (0.0143)(0.0466)(0.0118)(0.0122)(0.0122)(0.0176)(0.0235)
(25, 10)−0.0012−0.0094−0.0032−0.0031−0.0010−0.04990.0015
   (0.0240)(0.0525)(0.0221)(0.0226)(0.0227)(0.1254)(0.0309)
0.0(10, 25)0.0009−0.0006−0.0003−0.00010.00930.0032−0.0017
   (0.0005)(0.0006)(0.0112)(0.0005)(0.0005)(0.0011)(0.0164)
(10, 50)0.0004−0.00050.00030.00090.0038−0.00180.0001
   (0.0005)(0.0005)(0.0112)(0.0005)(0.0005)(0.0011)(0.0162)
(25, 10)0.00090.0014−0.00260.00120.0015−0.00330.0010
   (0.0009)(0.0009)(0.0209)(0.0008)(0.0008)(0.0018)(0.0203)
1.0(10, 25)−0.00300.0003−0.0015−0.0018−0.0033−0.00020.0084
   (0.0117)(0.0122)(0.0117)(0.0118)(0.0117)(0.0167)(0.0222)
(10, 50)−0.00020.0041−0.0018−0.00100.0025−0.00120.0010
   (0.0118)(0.0126)(0.0117)(0.0118)(0.0117)(0.0166)(0.0223)
(25, 10)0.0055−0.0030−0.00110.00350.0046−0.0004−0.0011
   (0.0224)(0.0239)(0.0222)(0.0221)(0.0223)(0.0367)(0.0286)
2.0−1.0(10, 25)−0.0204−0.0374−0.0004−0.0157−0.0092−0.01810.0036
   (0.0641)(0.0706)(0.0235)(0.0375)(0.0346)(0.0797)(0.0705)
(10, 50)−0.0251−0.0396−0.0012−0.0062−0.0130−0.0190−0.0013
   (0.0638)(0.0671)(0.0236)(0.0372)(0.0352)(0.0805)(0.0713)
(25, 10)−0.0008−0.06880.00030.0142−0.0092−0.1897−0.0054
   (0.0477)(0.2115)(0.0444)(0.0467)(0.0476)(0.3748)(0.0889)
0.0(10, 25)0.00600.00110.00190.0007−0.00090.0000−0.0001
   (0.0010)(0.0011)(0.0224)(0.0010)(0.0011)(0.0022)(0.0282)
(10, 50)−0.00160.0016−0.0018−0.00070.0019−0.0035−0.0054
   (0.0010)(0.0010)(0.0225)(0.0011)(0.0010)(0.0024)(0.0285)
(25, 10)0.0001−0.00020.0006−0.00040.00190.00460.0018
   (0.0016)(0.0018)(0.0422)(0.0017)(0.0017)(0.0037)(0.0341)
1.0(10, 25)−0.00310.0060−0.0025−0.0013−0.0059−0.00290.0047
   (0.0238)(0.0243)(0.0234)(0.0235)(0.0235)(0.0349)(0.0489)
(10, 50)0.0012−0.00160.00190.0031−0.0013−0.00200.0045
   (0.0235)(0.0254)(0.0235)(0.0236)(0.0235)(0.0353)(0.0498)
(25, 10)0.00110.00690.00270.0047−0.00130.06420.0008
   (0.0445)(0.0982)(0.0444)(0.0447)(0.0446)(0.2225)(0.0619)
Table 4. Albumin in plasma protein.
Experiment | n_i | Mean | Variance
A | 12 | 62.30 | 12.99
B | 15 | 60.30 | 7.84
C | 7 | 59.50 | 33.43
D | 16 | 61.50 | 18.51
Table 5. The proposed test estimator for albumin in plasma protein with μ_0 = 59.50.
μ̂_GD | μ̂_PT (T) | μ̂_PT (W, r = 2) | μ̂_PT (W, r = 3) | μ̂_PT (W, r = 4) | μ̂_PT (IN) | μ̂_PT (F) | μ̂_PT (L) | μ̂_PT (Mt) | μ̂_PT (MF)
60.99 | 59.50 | 60.99 | 59.50 | 59.50 | 59.50 | 59.50 | 60.99 | 60.99 | 60.99
T: Tippett's test; W: Wilkinson's test; IN: inverse normal test; F: Fisher's inverse χ²-test; L: logit test; Mt: modified t test; MF: modified F test.
Table 6. Selenium in non-fat milk powder.
Methods | n_i | Mean | Variance
Atomic absorption spectrometry | 8 | 105.00 | 85.71
Neutron activation:
(1.) Instrumental | 12 | 109.75 | 20.75
(2.) Radiochemical | 14 | 109.50 | 2.73
Isotope dilution mass spectrometry | 8 | 113.25 | 33.64
Table 7. The proposed test estimator for selenium in non-fat milk powder for various values of μ_0.
μ_0 | μ̂_GD | μ̂_PT (T) | μ̂_PT (W, r = 2) | μ̂_PT (W, r = 3) | μ̂_PT (W, r = 4) | μ̂_PT (IN) | μ̂_PT (F) | μ̂_PT (L) | μ̂_PT (Mt) | μ̂_PT (MF)
90.00 | 109.60 | 109.60 | 109.60 | 109.60 | 109.60 | 109.60 | 109.60 | 109.60 | 109.60 | 109.60
100.00 | 109.60 | 109.60 | 109.60 | 109.60 | 109.60 | 109.60 | 109.60 | 109.60 | 109.60 | 109.60
110.00 | 109.60 | 110.00 | 110.00 | 110.00 | 110.00 | 109.60 | 110.00 | 110.00 | 109.60 | 109.60
110.50 | 109.60 | 110.50 | 110.50 | 110.50 | 110.50 | 109.60 | 110.50 | 110.50 | 109.60 | 109.60
111.00 | 109.60 | 109.60 | 111.00 | 111.00 | 109.60 | 109.60 | 109.60 | 109.60 | 109.60 | 111.00
T: Tippett's test; W: Wilkinson's test; IN: inverse normal test; F: Fisher's inverse χ²-test; L: logit test; Mt: modified t test; MF: modified F test.