Article

New Approaches on Parameter Estimation of the Gamma Distribution

1
College of Big Data and Internet, Shenzhen Technology University, Shenzhen 518118, China
2
Faculty of Science and Technology, BNU-HKBU United International College, Zhuhai 519087, China
3
Guangdong Provincial Key Laboratory of Interdisciplinary Research and Application for Data Science, BNU-HKBU United International College, Zhuhai 519087, China
4
Department of Mathematics, Hong Kong Baptist University, Hong Kong, China
*
Author to whom correspondence should be addressed.
Mathematics 2023, 11(4), 927; https://doi.org/10.3390/math11040927
Submission received: 27 January 2023 / Revised: 5 February 2023 / Accepted: 9 February 2023 / Published: 12 February 2023
(This article belongs to the Special Issue Distribution Theory and Application)

Abstract

This paper discusses new approaches to parameter estimation of the gamma distribution based on representative points. In the first part, the existence and uniqueness of gamma mean squared error representative points (MSE-RPs) are discussed theoretically. In the second part, by comparing three types of representative points, we show that gamma MSE-RPs perform well in parameter estimation and simulation. The last part proposes a new Harrell–Davis sample standardization technique. Simulation studies reveal that the standardized samples can be used to improve estimation performance or to generate MSE-RPs. In addition, a real data analysis illustrates that the proposed technique yields efficient estimates for the gamma parameters.

1. Introduction

The term representative points (RPs) refers to a set of supporting points with corresponding probabilities that can be used as the best approximation of a d-dimensional probability distribution. Representative points can be regarded as a discretization of a continuous distribution and are expected to retain as much information as possible. In the univariate case, let X be a population random variable with cumulative distribution function (cdf) $F(x)$. A discrete random variable Z is defined to approximate X, with probability mass function (pmf) given by a set of supporting points $z = \{z_1, z_2, \ldots, z_k\}$ ($-\infty < z_1 < z_2 < \cdots < z_k < \infty$) and probabilities $\{p_1, p_2, \ldots, p_k\}$, where $P(Z = z_i) = p_i$ and $\sum_{i=1}^k p_i = 1$. In the literature, there are several approaches to choosing the supporting point set z. For example, a set of random samples from $F(x)$ can be viewed as a representative of the distribution; Fang and Wang [1] suggest generating representative points based on the number theoretic method. In 1957, Cox [2] proposed the idea of using the mean squared error (MSE) to measure the loss of information from $F(x)$, where
$$MSE(z) = MSE(z_1, z_2, \ldots, z_k) = E\left(\min_{i=1,\ldots,k}(z_i - X)^2\right) = \int \min_{i=1,\ldots,k}(z_i - x)^2 \, dF(x). \tag{1}$$
The point set $z^{MSE} = \{z_1^{MSE}, \ldots, z_k^{MSE}\}$ at which $MSE(z)$ attains its minimum is called the set of mean squared error representative points (MSE-RPs) of $F(x)$. MSE-RPs have many good properties and have been applied in fields such as signal compression (Gersho and Gray [3]), numerical integration (Pagès [4,5]), the simulation of stochastic differential equations (Gobet et al. [6]; El Amri et al. [7]), statistical simulation (Fang et al. [8]; Fang et al. [9]) and clothing standard setting (Fang and He [10]; Flury [11]). Effective numerical methods have been proposed to compute MSE-RPs for different distributions. The Fang–He algorithm (Fang and He [10]) calculates MSE-RPs by solving a system of non-linear equations; the Lloyd I algorithm (Lloyd [12]), the LBG algorithm (Linde et al. [13]) and the Competitive Learning Vector Quantization algorithm (Pagès [5]) obtain MSE-RPs by iterating over a long training sequence of data; Tarpey's self-consistency algorithm (Tarpey [14]) applies the idea of the k-means algorithm to generate MSE-RPs; Chakraborty et al. [15] provide an accelerated algorithm using Newton's method. When the number k of MSE-RPs is large, obtaining MSE-RPs becomes computationally intensive. Fang and He [10] present some discussion on the optimum choice of k.
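The MSE criterion above is straightforward to evaluate numerically for any candidate support set. Below is a minimal Python sketch (the function name `mse_of_points` is ours, not from the paper) that computes $MSE(z)$ for a gamma population by one-dimensional quadrature; the $k = 1$ case, whose minimizer is the population mean, gives a quick sanity check.

```python
import numpy as np
from scipy import integrate, stats

def mse_of_points(z, dist):
    """Numerically evaluate MSE(z) = E[min_i (z_i - X)^2] for a candidate
    support set z under a frozen scipy.stats distribution `dist`."""
    z = np.sort(np.asarray(z, dtype=float))
    val, _ = integrate.quad(
        lambda x: np.min((z - x) ** 2) * dist.pdf(x), 0, np.inf, limit=200)
    return val

# Sanity check against the k = 1 case: for Ga(2, 0.5) the single best point
# is the mean a/b = 4, where MSE equals the variance a/b^2 = 8.
ga = stats.gamma(a=2, scale=1 / 0.5)   # shape-rate form: scale = 1/b
mse_at_mean = mse_of_points([4.0], ga)
```

For a singleton $\{z_1\}$, $MSE = \sigma^2 + (z_1 - \mu)^2$, so any point other than the mean gives a strictly larger value.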
Recently, the properties of MSE-RPs have been studied in detail for several distributions, including the normal distribution (Fang et al. [8]), mixed normal distributions (Fang et al. [9] and Li et al. [16]), the arcsine distribution (Jiang et al. [17]) and the exponential distribution (Xu et al. [18]). A general relationship between MSE-RPs and the population distribution can be found in the work of Fei [19] and Fang et al. [9]. The study of the gamma distribution's MSE-RPs (gamma MSE-RPs) can be traced back to Fu [20], who discusses the existence of gamma MSE-RPs and establishes an algorithm for computing these points. Since the gamma distribution is one of the most important distributions in statistics and probability theory, it is worth taking a closer look at gamma MSE-RPs and discovering their merits. The innovations of this paper are as follows:
  • New theoretical results prove the uniqueness of gamma MSE-RPs;
  • Gamma MSE-RPs are found to outperform other types of representative points in parameter estimation;
  • A new standardization technique is proposed to improve the estimation performance of random samples from the gamma distribution.
Our discussion will focus on these three perspectives. Section 2 provides some preliminary knowledge of the gamma distribution and of the different types of representative points, so that readers can access our content easily. Section 3 gives a theoretical discussion of the existence and uniqueness of gamma MSE-RPs, and an algorithm for generating gamma MSE-RPs is recommended. Section 4 compares the performance of three typical types of gamma representative points in parameter estimation and simulation; the results demonstrate that gamma MSE-RPs have advantages over the other representative points in many scenarios. Section 5 introduces a new Harrell–Davis standardization technique. Simulation studies show that the standardized samples perform better than random samples in estimation and can be used to generate gamma MSE-RPs. Section 6 provides a real clinical data analysis and illustrates that the standardization technique yields efficient estimates for the gamma parameters.

2. Preliminaries

2.1. The Gamma Distribution and Gamma MSE-RPs

A gamma-distributed random variable with shape parameter a and rate parameter b is denoted by $X \sim Gamma(a, b) \equiv Ga(a, b)$. The corresponding probability density function (pdf) in the shape-rate parametrization is
$$f(x; a, b) = \frac{b^a}{\Gamma(a)} x^{a-1} e^{-bx}, \quad \text{for } x > 0, \ a, b > 0, \tag{2}$$
where $\Gamma(\cdot)$ is the gamma function. The mean, variance, skewness and (excess) kurtosis of X are
$$\mu = \frac{a}{b}, \quad \sigma^2 = \frac{a}{b^2}, \quad \mathrm{Sk}(X) = \frac{2}{\sqrt{a}} \quad \text{and} \quad \mathrm{Ku}(X) = \frac{6}{a}$$
accordingly. Let $z^{MSE} = \{z_1^{MSE}, z_2^{MSE}, \ldots, z_k^{MSE}\}$ be a set of MSE-RPs for $Ga(a, b)$, and derive the following intervals
$$I_1 = \left[0, \tfrac{z_1^{MSE}+z_2^{MSE}}{2}\right), \ I_2 = \left[\tfrac{z_1^{MSE}+z_2^{MSE}}{2}, \tfrac{z_2^{MSE}+z_3^{MSE}}{2}\right), \ \ldots, \ I_{k-1} = \left[\tfrac{z_{k-2}^{MSE}+z_{k-1}^{MSE}}{2}, \tfrac{z_{k-1}^{MSE}+z_k^{MSE}}{2}\right), \ I_k = \left[\tfrac{z_{k-1}^{MSE}+z_k^{MSE}}{2}, +\infty\right) \tag{3}$$
with the corresponding probabilities in these intervals as
$$p_i = \int_{I_i} f(x; a, b)\, dx, \quad i = 1, \ldots, k. \tag{4}$$
Here $f(x; a, b)$ is the pdf in (2).
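Given any sorted support set, the interval probabilities above reduce to differences of the gamma cdf at the midpoints; a short sketch (the helper name `rp_probabilities` is ours):

```python
import numpy as np
from scipy import stats

def rp_probabilities(z, a, b):
    """Probabilities p_i attached to a sorted support set z of Ga(a, b):
    the Ga(a, b) mass of each midpoint interval I_1, ..., I_k."""
    z = np.sort(np.asarray(z, dtype=float))
    mid = (z[:-1] + z[1:]) / 2                     # interval boundaries
    edges = np.concatenate(([0.0], mid, [np.inf]))
    cdf = stats.gamma.cdf(edges, a, scale=1 / b)   # shape-rate parametrization
    return np.diff(cdf)

p = rp_probabilities([1.0, 3.0, 6.0], a=2, b=0.5)
# By construction the p_i are positive and sum to one.
```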

2.2. Other Types of Representative Points

In addition to MSE-RPs, two other types of representative points are frequently discussed in the literature: Monte Carlo representative points and number theoretic representative points.
(A)
Monte Carlo representative points (MC-RPs)
MC-RPs are generated by the Monte Carlo method. A random sample $\{x_1, x_2, \ldots, x_k\}$ from the distribution function $F(x)$ can be treated as a set of MC-RPs, written as $z^{MC} = \{z_1^{MC}, z_2^{MC}, \ldots, z_k^{MC}\}$, where $z_i^{MC} = x_i$ and $p(z_i^{MC}) = 1/k$, $i = 1, \ldots, k$.
(B)
Number theoretic representative points (NT-RPs)
NT-RPs are determined by the number theoretic method (Fang and Wang [1]). On the one-dimensional interval $(0, 1)$, the point set $\frac{2i-1}{2k}$ ($i = 1, \ldots, k$) is known to be uniformly scattered. Based on the inverse transformation method, the points
$$z_i^{NT} = F^{-1}\left(\frac{2i-1}{2k}\right), \quad i = 1, \ldots, k \tag{5}$$
are k NT-RPs of $F(x)$. The supporting point set is $z^{NT} = \{z_1^{NT}, z_2^{NT}, \ldots, z_k^{NT}\}$ with probabilities $p(z_i^{NT}) = 1/k$, $i = 1, \ldots, k$.
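Since the gamma cdf has a readily available inverse, NT-RPs can be generated in a few lines; a sketch using `scipy.stats.gamma.ppf` (the helper name `nt_rps` is ours):

```python
import numpy as np
from scipy import stats

def nt_rps(k, a, b):
    """k number theoretic representative points of Ga(a, b): the inverse cdf
    evaluated on the uniformly scattered grid (2i - 1) / (2k), i = 1, ..., k."""
    u = (2 * np.arange(1, k + 1) - 1) / (2 * k)
    return stats.gamma.ppf(u, a, scale=1 / b)   # each point carries weight 1/k

z_nt = nt_rps(5, a=2, b=0.5)   # five NT-RPs of Ga(2, 0.5), strictly increasing
```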

2.3. Harrell–Davis Quantile Estimator

Harrell and Davis [21] propose a distribution-free quantile estimator that is a linear combination of the order statistics and admits a jackknife variance. Let $X_1, X_2, \ldots, X_n$ denote a random sample of size n from $Ga(a, b)$; the estimator of the p-th quantile is
$$Q_p = \frac{1}{\beta\{(n+1)p, (n+1)(1-p)\}} \int_0^1 F_n^{-1}(y)\, y^{(n+1)p-1} (1-y)^{(n+1)(1-p)-1}\, dy, \tag{6}$$
where $F_n(x)$ is the empirical distribution function, that is, $F_n(x) = n^{-1} \sum_{i=1}^n I(X_i \le x)$, and $I(\cdot)$ is the indicator function. This method can be used for sample standardization; more details are discussed in Section 5.
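A convenient computational form of the estimator above rewrites the integral as a weighted average of the order statistics, with weights given by increments of the Beta((n+1)p, (n+1)(1−p)) cdf over the grid i/n; a sketch (the helper name `hd_quantile` is ours):

```python
import numpy as np
from scipy import stats

def hd_quantile(x, p):
    """Harrell-Davis estimate of the p-th quantile: a weighted average of the
    order statistics, with weights given by increments of the
    Beta((n+1)p, (n+1)(1-p)) cdf -- equivalent to the integral form in (6)."""
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    edges = np.arange(n + 1) / n
    w = np.diff(stats.beta.cdf(edges, (n + 1) * p, (n + 1) * (1 - p)))
    return np.dot(w, x)

rng = np.random.default_rng(0)
sample = rng.gamma(shape=2, scale=2, size=200)   # Ga(2, 0.5) in rate form
med_hd = hd_quantile(sample, 0.5)                # smooth estimate of the median
```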

3. The Existence and Uniqueness of Gamma MSE-RPs

Let $X \sim Ga(a, b)$ be a random variable with $E(X) = \mu$, and let $z = \{z_1, z_2, \ldots, z_k\}$ ($0 < z_1 < z_2 < \cdots < z_k < \infty$) be the supporting point set of X. To minimize $MSE(z)$, taking partial derivatives of (1), we have
$$\int_0^{\frac{1}{2}(z_1+z_2)} (z_1 - x) f(x)\, dx = 0, \quad \int_{\frac{1}{2}(z_1+z_2)}^{\frac{1}{2}(z_2+z_3)} (z_2 - x) f(x)\, dx = 0, \quad \ldots, \quad \int_{\frac{1}{2}(z_{k-1}+z_k)}^{\infty} (z_k - x) f(x)\, dx = 0, \tag{7}$$
where $f(x)$ is the gamma pdf (2). When $k = 1$, the system of Equations (7) reduces to the single equation
$$\int_0^{\infty} (z_1 - x) f(x)\, dx = 0.$$
Obviously, it has the unique solution $z_1 = a/b = \mu$, which is the only representative point. When $k \ge 2$, MSE-RPs exist if the system of Equations (7) has a solution. After several transformations, (7) becomes
$$\begin{aligned}
(z_1 - \mu)\left[F\left(\tfrac{z_1+z_2}{2}\right) - F(0)\right] &= -\tfrac{1}{2b}(z_1+z_2)\, f\left(\tfrac{z_1+z_2}{2}\right) \\
(z_2 - \mu)\left[F\left(\tfrac{z_2+z_3}{2}\right) - F\left(\tfrac{z_1+z_2}{2}\right)\right] &= \tfrac{1}{2b}(z_1+z_2)\, f\left(\tfrac{z_1+z_2}{2}\right) - \tfrac{1}{2b}(z_2+z_3)\, f\left(\tfrac{z_2+z_3}{2}\right) \\
&\ \ \vdots \\
(z_k - \mu)\left[1 - F\left(\tfrac{z_{k-1}+z_k}{2}\right)\right] &= \tfrac{1}{2b}(z_{k-1}+z_k)\, f\left(\tfrac{z_{k-1}+z_k}{2}\right),
\end{aligned} \tag{8}$$
where F ( x ) is the cdf. Theorem 1 shows that the system of Equation (8) has a solution:
Theorem 1.
  • For given $z_1 > 0$, the equation
    $$(z_1 - \mu)\, F\left(\frac{z_1+z_2}{2}\right) = -\frac{1}{2b}(z_1+z_2)\, f\left(\frac{z_1+z_2}{2}\right)$$
    has a solution $z_2$ if and only if $z_1 < \mu$.
  • For given $z_i > z_{i-1} > 0$, $i = 2, \ldots, k-1$, the equation
    $$(z_i - \mu)\left[F\left(\frac{z_i+z_{i+1}}{2}\right) - F\left(\frac{z_{i-1}+z_i}{2}\right)\right] = \frac{1}{2b}(z_{i-1}+z_i)\, f\left(\frac{z_{i-1}+z_i}{2}\right) - \frac{1}{2b}(z_i+z_{i+1})\, f\left(\frac{z_i+z_{i+1}}{2}\right)$$
    has a solution $z_{i+1}$ when $z_{i-1} < z_{i-1,i}$, where $z_{i-1,i}$ is the $(i-1)$-th representative point in the set of gamma MSE-RPs of size $k = i$.
  • For given $z_{k-1} > 0$, the equation
    $$(z_k - \mu)\left[1 - F\left(\frac{z_{k-1}+z_k}{2}\right)\right] = \frac{1}{2b}(z_{k-1}+z_k)\, f\left(\frac{z_{k-1}+z_k}{2}\right)$$
    has a solution $z_k$.
Theorem 1 guarantees the existence of gamma MSE-RPs; its proof is provided in Appendix A. For the special case $k = 2$, the existence follows from statements 1 and 3 of Theorem 1. Next, we show the uniqueness of gamma MSE-RPs in Theorem 2.
Theorem 2.
Suppose $X \sim Ga(a, b)$. For any $k \in \mathbb{N}^+$, the set of gamma MSE-RPs is unique if $a \ge 1$.
The proof of Theorem 2 is provided in Appendix A. As a result, these two theorems guarantee the existence and uniqueness of gamma MSE-RPs. Furthermore, throughout this paper, gamma MSE-RPs are generated based on the self-consistency algorithm [22]. The details of this algorithm are provided in Appendix B.

4. Gamma MSE-RPs in Parameter Estimation and Simulation

This section compares the performance of gamma MSE-RPs with that of the other types of representative points, i.e., NT-RPs and MC-RPs, in terms of parameter estimation and simulation. Recall that the random variable $X \sim Ga(a, b)$ and that Z is a discrete approximation of X. The mean, variance, skewness and kurtosis of Z are
$$E(Z) = \sum_{i=1}^k z_i p_i \equiv \mu_z, \quad \mathrm{Var}(Z) = \sum_{i=1}^k (z_i - \mu_z)^2 p_i \equiv \sigma_z^2, \quad \mathrm{Sk}(Z) = \frac{1}{\sigma_z^3} \sum_{i=1}^k (z_i - \mu_z)^3 p_i, \quad \mathrm{Ku}(Z) = \frac{1}{\sigma_z^4} \sum_{i=1}^k (z_i - \mu_z)^4 p_i - 3. \tag{11}$$
By the method of moments, we have
$$\hat{a}_{m2} = \frac{\mu_z^2}{\sigma_z^2} \quad \text{and} \quad \hat{b}_{m2} = \frac{\mu_z}{\sigma_z^2}, \tag{12}$$
which are the point estimators of a and b in $Ga(a, b)$. As Z is a discrete approximation of X, the moments of Z and the estimates in (12) are expected to be close to the moments of X and to a and b, respectively. The following theorem shows some connections between gamma MSE-RPs and the corresponding $Ga(a, b)$.
Theorem 3.
Let $X \sim Ga(a, b)$ with $\mathrm{Var}(X) = \sigma^2 < \infty$, and let $z = \{z_1, z_2, \ldots, z_k\}$ be a set of gamma MSE-RPs of $Ga(a, b)$ with the corresponding probabilities in (4); then,
$$E(Z) = E(X) \quad \text{and} \quad \lim_{k \to \infty} \mathrm{Var}(Z) = \mathrm{Var}(X). \tag{13}$$
The proof of Theorem 3 is provided in Appendix A. Note that Theorem 3 holds not only for the gamma distribution but for any continuous population distribution. Next, the moments and the estimates in (12) are calculated from MSE-RPs, NT-RPs and MC-RPs of different $Ga(a, b)$. Three typical shapes of gamma distribution are chosen ($Ga(1, 0.5)$, monotone decreasing; $Ga(2, 0.5)$, right-skewed; and $Ga(7.5, 1)$, bell-shaped; their pdfs are plotted in Figure 1), and the representative points are set to three sizes ($k = 5, 20, 100$). The first part of Table 1, Table 2 and Table 3 summarizes the results in the different scenarios; the last line of each table presents the moments and parameters of $Ga(a, b)$. It is clear that, for fixed k, the moments and estimates from MSE-RPs are closer to the true values than those from the other representative points. Moreover, the means of MSE-RPs are almost equal to the means of $Ga(a, b)$ in all scenarios, and as k becomes large, the moments and estimates from MSE-RPs converge to the true values much faster than those from the other representative points. These results are consistent with Theorem 3.
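The discrete moments and the moment estimators in (12) apply to any weighted support set. A sketch (helper name ours; NT-RPs are used as input because they are cheap to generate, and MSE-RPs would be plugged in identically):

```python
import numpy as np
from scipy import stats

def moment_estimates(z, p):
    """Method-of-moments estimates (12) from a weighted support set (z, p):
    a_hat = mu_z^2 / sigma_z^2 and b_hat = mu_z / sigma_z^2."""
    z, p = np.asarray(z, dtype=float), np.asarray(p, dtype=float)
    mu = np.dot(p, z)                   # E(Z)
    var = np.dot(p, (z - mu) ** 2)      # Var(Z)
    return mu ** 2 / var, mu / var

# k = 100 NT-RPs of Ga(2, 0.5), each with weight 1/k
k, a, b = 100, 2.0, 0.5
z = stats.gamma.ppf((2 * np.arange(1, k + 1) - 1) / (2 * k), a, scale=1 / b)
a_hat, b_hat = moment_estimates(z, np.full(k, 1 / k))   # close to (2, 0.5)
```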
Next, the comparison focuses on the estimation performance of samples drawn from representative points. We take samples from gamma distributions of different shapes ($Ga(1, 0.5)$, $Ga(2, 0.5)$ and $Ga(7.5, 1)$), as well as from their representative points with different sizes ($k = 5, 20, 100$). Setting the sample size to $N = 1000$ and repeating the sampling $M = 10{,}000$ times for each scenario, the method of moments estimates ($\hat{a}_{m2}$ and $\hat{b}_{m2}$) and maximum likelihood estimates ($\hat{a}_{mle}$ and $\hat{b}_{mle}$) are calculated. Define
$$\overline{\mathrm{PD}}_{\hat{a}} = \frac{1}{M}\sum_{i=1}^M \frac{|\hat{a}_i - a|}{a} \quad \text{and} \quad \overline{\mathrm{PD}}_{\hat{b}} = \frac{1}{M}\sum_{i=1}^M \frac{|\hat{b}_i - b|}{b} \tag{14}$$
as the average proportional deviations between the estimates and the parameters. The second part of Table 1, Table 2 and Table 3 shows that MSE-RPs samples have the smallest average proportional deviation in most of the selected scenarios. Table A1 and Table A2 in Appendix C give the medians and 95% empirical confidence intervals of $\hat{a}_{m2}$, $\hat{b}_{m2}$, $\hat{a}_{mle}$ and $\hat{b}_{mle}$. In this simulation study, we observe that the point estimates of a and b from MSE-RPs samples generally have good accuracy under both the moment and maximum likelihood methods. Meanwhile, when k is large, the estimation performance of MSE-RPs samples is similar to that of samples from the corresponding $Ga(a, b)$. It is also worth mentioning that when $k = 5$, the proportional deviations $\overline{\mathrm{PD}}_{\hat{a}_{m2}}$ and $\overline{\mathrm{PD}}_{\hat{b}_{m2}}$ are much smaller than $\overline{\mathrm{PD}}_{\hat{a}_{mle}}$ and $\overline{\mathrm{PD}}_{\hat{b}_{mle}}$. That is, when the size of the gamma MSE-RPs set is small, it is better to estimate the parameters using the method of moments.
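A reduced-scale sketch of this sampling experiment for random samples from Ga(2, 0.5) (M is lowered from 10,000 for speed; `mean_prop_dev` implements the average proportional deviation defined above, and the name is ours):

```python
import numpy as np

def mean_prop_dev(estimates, true_value):
    """Average proportional deviation: mean of |estimate - true| / true."""
    est = np.asarray(estimates, dtype=float)
    return float(np.mean(np.abs(est - true_value)) / true_value)

# Moment estimates of (a, b) = (2, 0.5) from M repeated Ga(2, 0.5) samples
rng = np.random.default_rng(1)
a, b, N, M = 2.0, 0.5, 1000, 200
a_hats, b_hats = [], []
for _ in range(M):
    x = rng.gamma(shape=a, scale=1 / b, size=N)
    mu, var = x.mean(), x.var()
    a_hats.append(mu ** 2 / var)   # a_hat = mu^2 / sigma^2
    b_hats.append(mu / var)        # b_hat = mu / sigma^2
pd_a, pd_b = mean_prop_dev(a_hats, a), mean_prop_dev(b_hats, b)
```

With N = 1000, both deviations come out at a few percent, matching the order of magnitude of the last row of the tables.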

5. Generating MSE-RPs from Harrell–Davis Standardized Samples

This section discusses how to generate MSE-RPs from a gamma-distributed sample. A commonly used approach has two steps as follows:
  • Calculate the maximum likelihood estimates (MLEs) for a and b, namely a ^ and b ^ , based on the sample dataset;
  • Generate MSE-RPs from the gamma distribution with the estimated parameters, i.e., G a ( a ^ , b ^ ) .
As noted above, the representativeness of the MSE-RPs depends on the estimates of the gamma parameters: more accurate estimates produce better representativeness. However, if a random sample does not represent the population well, the estimates may deviate substantially from the true parameters, and the generated MSE-RPs are then poor representatives of the population distribution. This typically occurs when the sample size is small or moderate. Next, we introduce a new Harrell–Davis (HD) standardization technique that can reduce the effect of sampling randomness. This technique transforms a random sample into a set of HD quantile estimators and then treats these estimators as a new "sample". Recall that a set of quantiles with equal probabilities is a set of NT-RPs for the population; a similar idea is utilized for sample standardization.
Definition 1
(HD standardized sample). Let $x = \{x_1, x_2, \ldots, x_n\}$ be a set of sample data from a gamma distribution. The set $x^* = \{Q_{p_1}, Q_{p_2}, \ldots, Q_{p_n}\}$ is called the HD standardized sample of x, where $Q_{p_i}$ is the $p_i$-th HD quantile estimator defined in (6), $p_i = \frac{2i-1}{2n}$ and $P(Q_{p_i}) = 1/n$ ($i = 1, 2, \ldots, n$).
Note that $x^*$ is not a random sample, because $Q_{p_1}, Q_{p_2}, \ldots, Q_{p_n}$ are not independent. However, since the quantile estimators are equiprobable ($P(Q_{p_i}) = 1/n$), the set $x^*$ can be treated as an arbitrarily selected sample and used to calculate the MLEs of a and b. A new approach to generating MSE-RPs is proposed as follows:
  • Obtain the HD standardized sample;
  • Calculate the MLEs for a and b, namely a ^ and b ^ , based on the HD standardized sample;
  • Generate MSE-RPs from G a ( a ^ , b ^ ) .
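The three steps above can be sketched as follows (function names ours; `scipy.stats.gamma.fit` with `floc=0` computes the two-parameter gamma MLE, and the final MSE-RP generation step is indicated only as a comment):

```python
import numpy as np
from scipy import stats

def hd_standardize(x):
    """Step 1: replace a sample of size n by the HD quantile estimators at
    p_i = (2i - 1) / (2n), as in Definition 1."""
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    edges = np.arange(n + 1) / n
    out = np.empty(n)
    for i in range(1, n + 1):
        p = (2 * i - 1) / (2 * n)
        w = np.diff(stats.beta.cdf(edges, (n + 1) * p, (n + 1) * (1 - p)))
        out[i - 1] = np.dot(w, x)   # Beta-weighted average of order statistics
    return out

rng = np.random.default_rng(2)
x = rng.gamma(shape=2, scale=2, size=200)            # raw Ga(2, 0.5) sample
x_hd = hd_standardize(x)                             # step 1
a_hat, _, scale_hat = stats.gamma.fit(x_hd, floc=0)  # step 2: MLE, loc fixed at 0
b_hat = 1 / scale_hat
# Step 3 would generate MSE-RPs from Ga(a_hat, b_hat), e.g. with the
# self-consistency algorithm of Appendix B.
```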
Next, a simulation study shows the good performance of HD standardized samples in parameter estimation. Consider three gamma distributions ($Ga(1, 0.5)$, $Ga(2, 0.5)$ and $Ga(7.5, 1)$) and three sample sizes ($n = 50, 200, 500$). In each scenario, $N = 10{,}000$ random samples are generated and their HD standardized samples are obtained. The MLEs calculated from each sample and standardized sample are summarized in Table 4, which shows that the means of the estimates from HD standardized samples are closer to the true values in most scenarios. Moreover, the estimates from HD standardized samples have smaller standard deviations than those from random samples. Based on these results, we conclude that HD standardized samples outperform random samples in terms of estimation accuracy and stability; it is therefore recommended to use the new three-step approach to generate MSE-RPs. Here, a comparison between the MSE-RPs generated from random samples and from HD samples is provided. The estimates ($\hat{a}$ and $\hat{b}$) in Table 4 are used to generate gamma MSE-RPs, and Table 5 summarizes the results for $n = 200$ with MSE-RPs of size $k = 20$. It shows that the moments of the gamma MSE-RPs from HD samples are close to the moments of the original $Ga(a, b)$. Meanwhile, the method of moments estimates in (12) are obtained; the estimates from HD samples are more accurate than those from random samples. This conclusion remains valid for $n = 50$ and $500$.
It is noteworthy that the HD standardization technique can also be applied in resampling. Consider another simulation study with the same settings as in Table 4. We resample from each sample and standardized sample with $n_r = n$ and calculate the MLEs. The means and standard deviations of the resampled MLEs are summarized in Table 6, which shows that estimates from standardized samples generally have better accuracy and smaller standard deviations under resampling.

6. Real Data Illustration

In this section, we consider a real-world dataset to illustrate the HD standardization technique proposed in the previous section. In this clinical study, the survival times (in years) of $n = 97$ Swiss females aged 70–74 inclusive at the time of diagnosis of dementia (a form of mental disorder) were studied by Elandt-Johnson and Johnson [23]. These data were analyzed by Ozonur and Paul [24] using the likelihood ratio test and the score test, with p-values 0.233 and 0.140, both greater than 0.05. Both tests suggest that the two-parameter gamma distribution fits the dementia data adequately.
Point estimates (MLEs) and bootstrap interval estimates [25] based on the original sample data and the corresponding HD sample are calculated. The approximate $(1-\alpha)$ bootstrap percentile interval is defined as
$$\left(\hat{\theta}_{\mathrm{lower}}, \hat{\theta}_{\mathrm{upper}}\right) = \left(\hat{\theta}^*_{\left(M \cdot \frac{\alpha}{2}\right)}, \hat{\theta}^*_{\left(M\left(1 - \frac{\alpha}{2}\right)\right)}\right). \tag{15}$$
In practice, we resample the original data $M = 1000$ times to obtain 1000 replications of the parameter estimates $\hat{\theta}^*$ (i.e., $\hat{a}^*$ and $\hat{b}^*$ for the gamma distribution) with $\alpha = 0.05$. These estimates are sorted; the 25th value is used as the lower bound and the 975th value as the upper bound. The MLEs based on the HD standardized sample are $\hat{a}_{HD} = 1.4602$ and $\hat{b}_{HD} = 0.2886$, with confidence intervals $(1.3846, 1.8073)$ and $(0.2637, 0.3839)$. The lengths of these confidence intervals are shorter than those based on the original sample data, where $\hat{a}_{origin} = 1.4602$ and $\hat{b}_{origin} = 0.2886$, with confidence intervals $(1.3777, 1.8632)$ and $(0.2659, 0.3914)$.
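The percentile interval above can be sketched as follows (function name ours; shown on synthetic data rather than the dementia dataset, and with M reduced for speed):

```python
import numpy as np
from scipy import stats

def percentile_ci(x, M=1000, alpha=0.05, seed=0):
    """Bootstrap percentile intervals for the gamma MLEs (a_hat, b_hat):
    resample the data M times, refit, and take the alpha/2 and 1 - alpha/2
    empirical quantiles of the replicated estimates."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x, dtype=float)
    reps = np.empty((M, 2))
    for m in range(M):
        xb = rng.choice(x, size=len(x), replace=True)
        a_b, _, scale_b = stats.gamma.fit(xb, floc=0)
        reps[m] = (a_b, 1 / scale_b)
    reps.sort(axis=0)                  # sort each parameter's replicates
    lo = int(M * alpha / 2) - 1        # e.g. the 25th of M = 1000 values
    hi = int(M * (1 - alpha / 2)) - 1  # e.g. the 975th of M = 1000 values
    return reps[lo], reps[hi]

# Illustration on synthetic Ga(2, 0.5) data of the same size as the study
rng = np.random.default_rng(3)
x = rng.gamma(shape=2, scale=2, size=97)
(lo_a, lo_b), (hi_a, hi_b) = percentile_ci(x, M=200)
```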

7. Concluding Remarks

In the first part of this paper, the existence and uniqueness of gamma MSE-RPs are proved using two different approaches, and an effective algorithm is recommended for generating gamma MSE-RPs. The second part compares gamma MSE-RPs with other representative points in terms of parameter estimation and simulation; the moments and estimates based on gamma MSE-RPs are the closest to the true values across the different scenarios, and samples from gamma MSE-RPs show good overall estimation accuracy. The last part introduces the new HD standardization technique. When a gamma-distributed sample is at hand, we recommend first transforming it into the HD standardized sample and then using it to estimate the gamma parameters or to generate MSE-RPs.
In future work, we would like to study whether the MSE-RPs of other distributions also perform well in parameter estimation. It would also be interesting to explain theoretically how the HD standardization technique reduces the randomness of samples.

Author Contributions

Conceptualization, X.K.; Methodology, S.W.; Validation, M.Z.; Supervision, H.Y. All authors have read and agreed to the published version of the manuscript.

Funding

This work was partially supported by the Guangdong Provincial Key Laboratory of Interdisciplinary Research and Application for Data Science, BNU-HKBU United International College (UIC), project code 2022B1212010006 and in part by Guangdong Higher Education Upgrading Plan (2021–2025) R0400001-22.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Acknowledgments

The authors thank the Editor, Associate Editor and referees for their constructive comments leading to significant improvement of this paper.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A. Proofs of Theorems

Proof of Theorem 1.
The proofs of the three points of this theorem are provided as follows. Without loss of generality, consider a gamma distribution with $b = 1$ (i.e., $\mu = a$) in all proofs.
Proof of point 1. Let
$$G(z_1, z_2) = (z_1 - a)\, F\left(\frac{z_1+z_2}{2}\right) + \frac{1}{2}(z_1+z_2)\, f\left(\frac{z_1+z_2}{2}\right);$$
then
$$G(z_1, z_1) = (z_1 - a)\, F(z_1) + z_1 f(z_1).$$
Because $\lim_{z_1 \to 0} G(z_1, z_1) = 0$ and
$$\frac{dG(z_1, z_1)}{dz_1} \equiv G'_{z_1}(z_1, z_1) = F(z_1) + (z_1 - a) f(z_1) + f(z_1) + (a-1) f(z_1) - z_1 f(z_1) = F(z_1) > 0,$$
we have $G(z_1, z_1) > 0$. In addition, from
$$G(z_1, \infty) = \lim_{z_2 \to \infty} G(z_1, z_2) = z_1 - a,$$
we have
$$G(z_1, \infty) < 0 \iff z_1 < a. \tag{A1}$$
Combining (A1) with the continuity of $G(z_1, z_2)$ for $z_2 \in [z_1, \infty)$ and the fact that $G(z_1, z_1) > 0$, point 1 of Theorem 1 is proved.
Proof of point 3. Let
$$G(z_{k-1}, z_k) = (z_k - a)\left[1 - F\left(\frac{z_{k-1}+z_k}{2}\right)\right] - \frac{1}{2}(z_{k-1}+z_k)\, f\left(\frac{z_{k-1}+z_k}{2}\right); \tag{A2}$$
therefore,
$$G(z_{k-1}, z_{k-1}) = (z_{k-1} - a)(1 - F(z_{k-1})) - z_{k-1} f(z_{k-1}).$$
Since $G(0, 0) = -a < 0$, $\lim_{z_{k-1} \to \infty} G(z_{k-1}, z_{k-1}) = 0$ and
$$G'_{z_{k-1}}(z_{k-1}, z_{k-1}) = 1 - F(z_{k-1}) - (z_{k-1} - a) f(z_{k-1}) - f(z_{k-1}) - (a-1) f(z_{k-1}) + z_{k-1} f(z_{k-1}) = 1 - F(z_{k-1}) > 0,$$
we have
$$G(z_{k-1}, z_{k-1}) < 0 \quad (z_{k-1} > 0).$$
Next, we show that, for $z_k \in [z_{k-1}, \infty)$, $G(z_{k-1}, z_k)$ is first monotonically increasing and then monotonically decreasing. Differentiating $G(z_{k-1}, z_k)$ with respect to $z_k$ gives
$$G'_{z_k}(z_{k-1}, z_k) = \left(1 - F\left(\tfrac{z_{k-1}+z_k}{2}\right)\right) - \tfrac{1}{2}(z_k - a)\, f\left(\tfrac{z_{k-1}+z_k}{2}\right) - \tfrac{1}{2} f\left(\tfrac{z_{k-1}+z_k}{2}\right) - \tfrac{1}{2}(a-1)\, f\left(\tfrac{z_{k-1}+z_k}{2}\right) + \tfrac{1}{4}(z_{k-1}+z_k)\, f\left(\tfrac{z_{k-1}+z_k}{2}\right) = 1 - F\left(\tfrac{z_{k-1}+z_k}{2}\right) - \tfrac{1}{4}(z_k - z_{k-1})\, f\left(\tfrac{z_{k-1}+z_k}{2}\right). \tag{A3}$$
Let
$$H(z_{k-1}, z_k) = G'_{z_k}(z_{k-1}, z_k), \quad \bar{z} = \frac{1}{2}(z_{k-1}+z_k);$$
we have
$$H'_{z_k}(z_{k-1}, z_k) = -\tfrac{1}{2} f(\bar{z}) - \tfrac{1}{4} f(\bar{z}) - \tfrac{1}{8}(z_k - z_{k-1})\left[\tfrac{2(a-1)}{z_{k-1}+z_k} - 1\right] f(\bar{z}) = -\left[\tfrac{3}{4} - \tfrac{1}{8}(z_k - z_{k-1}) + \tfrac{(a-1)(z_k - z_{k-1})}{4(z_{k-1}+z_k)}\right] f(\bar{z}) = \left[\bar{z}^2 - (a + 2 + z_{k-1})\bar{z} + (a-1) z_{k-1}\right] \frac{f(\bar{z})}{4\bar{z}}. \tag{A4}$$
Note that the quadratic in $\bar{z}$ in Equation (A4) has a positive leading coefficient. Since
$$H(z_{k-1}, z_{k-1}) = 1 - F(z_{k-1}) > 0 \quad \text{and} \quad \lim_{z_k \to \infty} H(z_{k-1}, z_k) = 0,$$
there must exist $C_0 > z_{k-1}$ such that
when $z_k < C_0$, $H'_{z_k}(z_{k-1}, z_k) < 0$; when $z_k > C_0$, $H'_{z_k}(z_{k-1}, z_k) > 0$.
Therefore, there exists $C^*$ ($z_{k-1} < C^* < C_0$) such that
when $z_k < C^*$, $G'_{z_k}(z_{k-1}, z_k) = H(z_{k-1}, z_k) > 0$; when $z_k > C^*$, $G'_{z_k}(z_{k-1}, z_k) = H(z_{k-1}, z_k) < 0$,
which means that $G(z_{k-1}, z_k)$ is first monotonically increasing and then monotonically decreasing. In addition, we have
$$\lim_{z_k \to \infty} G(z_{k-1}, z_k) = 0$$
and $G(z_{k-1}, z_{k-1}) < 0$. Thus, the function $G(z_{k-1}, z_k)$ must cross the x-axis, and the solution $z_k$ exists. One more step: since
$$\frac{dz_k}{dz_{k-1}} = -G'_{z_{k-1}}(z_{k-1}, z_k) \big/ G'_{z_k}(z_{k-1}, z_k),$$
with $G'_{z_k}(z_{k-1}, z_k) > 0$ in a neighborhood of the solution and, furthermore,
$$G'_{z_{k-1}}(z_{k-1}, z_k) = -\frac{1}{4}(z_k - z_{k-1})\, f\left(\frac{z_{k-1}+z_k}{2}\right) < 0,$$
we find that $z_k$ is a monotonically increasing function of $z_{k-1}$.
Proof of point 2. Proving point 2 is complicated; here, we provide the proof for the special case $X \sim Ga(1, 1)$. Let
$$G(z_{i-1}, z_i, z_{i+1}) = (z_i - a)\left[F\left(\tfrac{z_i+z_{i+1}}{2}\right) - F\left(\tfrac{z_{i-1}+z_i}{2}\right)\right] + \tfrac{1}{2}(z_i+z_{i+1})\, f\left(\tfrac{z_i+z_{i+1}}{2}\right) - \tfrac{1}{2}(z_{i-1}+z_i)\, f\left(\tfrac{z_{i-1}+z_i}{2}\right); \tag{A5}$$
thus,
$$G(z_{i-1}, z_i, z_i) = (z_i - a)\left[F(z_i) - F\left(\tfrac{z_{i-1}+z_i}{2}\right)\right] + z_i f(z_i) - \tfrac{z_{i-1}+z_i}{2}\, f\left(\tfrac{z_{i-1}+z_i}{2}\right).$$
Differentiating $G(z_{i-1}, z_i, z_i)$ with respect to $z_i$, we have
$$G'_{z_i}(z_{i-1}, z_i, z_i) = F(z_i) - F\left(\tfrac{z_{i-1}+z_i}{2}\right) - \tfrac{z_i - z_{i-1}}{4}\, f\left(\tfrac{z_{i-1}+z_i}{2}\right). \tag{A6}$$
Let
$$H(z_{i-1}, z_i) = G'_{z_i}(z_{i-1}, z_i, z_i);$$
we have
$$H'_{z_i}(z_{i-1}, z_i) = f(z_i) - \left[\tfrac{3}{4} - \tfrac{1}{8}(z_i - z_{i-1}) + \tfrac{(a-1)(z_i - z_{i-1})}{4(z_{i-1}+z_i)}\right] f\left(\tfrac{z_{i-1}+z_i}{2}\right). \tag{A7}$$
For $X \sim Ga(1, 1)$, (A7) simplifies to
$$H'_{z_i}(z_{i-1}, z_i) = f(z_i) - \left[\tfrac{3}{4} - \tfrac{1}{8}(z_i - z_{i-1})\right] f\left(\tfrac{z_{i-1}+z_i}{2}\right); \tag{A8}$$
setting $x = \frac{z_i - z_{i-1}}{2}$, we have
$$f(z_i) - \left[\tfrac{3}{4} - \tfrac{1}{8}(z_i - z_{i-1})\right] f\left(\tfrac{z_{i-1}+z_i}{2}\right) = 0 \iff \frac{e^{-z_i}}{e^{-(z_{i-1}+z_i)/2}} = \tfrac{3}{4} - \tfrac{1}{8}(z_i - z_{i-1}) \iff e^{-x} = \tfrac{3}{4} - \tfrac{1}{4}x.$$
Therefore, $H'_{z_i}(z_{i-1}, z_i)$ crosses the x-axis twice for $z_i > z_{i-1}$.
Since $H'_{z_i}(z_{i-1}, z_{i-1}) = \frac{1}{4} f(z_{i-1}) > 0$ and $\lim_{z_i \to \infty} H'_{z_i}(z_{i-1}, z_i) = 0$, combined with the facts that $H(z_{i-1}, z_{i-1}) = 0$ and $\lim_{z_i \to \infty} H(z_{i-1}, z_i) = 0$, we know that, for $z_i \in [z_{i-1}, \infty)$, $G(z_{i-1}, z_i, z_i)$ is first monotonically increasing and then monotonically decreasing. In addition, $G(z_{i-1}, z_{i-1}, z_{i-1}) = 0$ and $\lim_{z_i \to \infty} G(z_{i-1}, z_i, z_i) = 0$; we conclude that $G(z_{i-1}, z_i, z_i) > 0$ when $z_i > z_{i-1} > 0$. Next, consider
$$G'_{z_{i+1}}(z_{i-1}, z_i, z_{i+1}) = \tfrac{z_i - a}{2}\, f\left(\tfrac{z_i+z_{i+1}}{2}\right) + \tfrac{1}{2} f\left(\tfrac{z_i+z_{i+1}}{2}\right) + \tfrac{a-1}{2}\, f\left(\tfrac{z_i+z_{i+1}}{2}\right) - \tfrac{1}{4}(z_i+z_{i+1})\, f\left(\tfrac{z_i+z_{i+1}}{2}\right) = \tfrac{z_i - z_{i+1}}{4}\, f\left(\tfrac{z_i+z_{i+1}}{2}\right) < 0;$$
therefore, the solution $z_{i+1}$ exists if
$$G(z_{i-1}, z_i, \infty) < 0.$$
We find that
$$G(z_{i-1}, z_i, \infty) = (z_i - a)\left[1 - F\left(\tfrac{z_{i-1}+z_i}{2}\right)\right] - \tfrac{z_{i-1}+z_i}{2}\, f\left(\tfrac{z_{i-1}+z_i}{2}\right)$$
is exactly of the form (A2). From the analysis in the proof of point 3, we conclude that the solution $z_{i+1}$ exists when $z_{i-1} < z_{i-1,i}$. □
Proof of Theorem 2.
Let X be a random variable following $Ga(a, b)$ with pdf
$$g(x; a, b) = \frac{b^a}{\Gamma(a)} x^{a-1} e^{-bx}, \quad \text{for } x > 0 \text{ and } a, b > 0. \tag{A9}$$
If X has a log-concave density function, there exists a unique set of MSE-RPs (Trushkin [26]). The function $g(x; a, b)$ is log-concave if
$$(\ln g(x; a, b))'' \le 0. \tag{A10}$$
Based on (A9), we have
$$\ln g(x; a, b) = (a-1)\ln x + a \ln b - bx - \ln \Gamma(a), \quad (\ln g(x; a, b))' = (a-1)x^{-1} - b, \quad (\ln g(x; a, b))'' = -(a-1)x^{-2}.$$
The inequality (A10) holds when $a \ge 1$. □
Proof of Theorem 3.
Rearranging the system of Equations (7), we have
$$z_1 \int_0^{\frac{1}{2}(z_1+z_2)} f(x)\, dx = \int_0^{\frac{1}{2}(z_1+z_2)} x f(x)\, dx, \quad z_2 \int_{\frac{1}{2}(z_1+z_2)}^{\frac{1}{2}(z_2+z_3)} f(x)\, dx = \int_{\frac{1}{2}(z_1+z_2)}^{\frac{1}{2}(z_2+z_3)} x f(x)\, dx, \quad \ldots, \quad z_k \int_{\frac{1}{2}(z_{k-1}+z_k)}^{\infty} f(x)\, dx = \int_{\frac{1}{2}(z_{k-1}+z_k)}^{\infty} x f(x)\, dx. \tag{A11}$$
Summing the left- and right-hand sides of (A11) over all k equations, $E(Z) = E(X)$ is obtained, and the first part of the theorem is proved. Next, from Theorem 3 in Fei [19], we have
$$\lim_{k \to \infty} MSE(z_1, z_2, \ldots, z_k) = 0. \tag{A12}$$
Theorem 5 of Fei [19] shows that
$$Var(Z) = \left(1 - \frac{MSE(z_1, z_2, \ldots, z_k)}{Var(X)}\right) Var(X). \tag{A13}$$
Combining (A12) and (A13), the second part of the theorem is proved. □

Appendix B. Self-Consistency Algorithm for Generating Gamma MSE-RPs

The self-consistency algorithm [22] has the following steps:
1. Let $z^{(0)} = \{z_1^{NT}, z_2^{NT}, \ldots, z_k^{NT}\}$ be the initial set.
2. Compute the conditional expectations $z^{(1)} = E(X \mid z^{(0)})$ using the system of equations
$$z_i = \frac{\int_{I_i} x\, dF(x)}{\int_{I_i} dF(x)}, \quad i = 1, 2, \ldots, k,$$
and compare the distance between $z^{(0)}$ and $z^{(1)}$ componentwise. If the maximum distance is not smaller than a pre-defined tolerance, e.g., $\epsilon = 10^{-10}$, proceed to the next step.
3. Repeat step 2 to obtain $z^{(2)}, z^{(3)}, z^{(4)}, \ldots$, until convergence is reached.
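A compact implementation of these steps for Ga(a, b) (the function name is ours). It uses the identity that the partial first moment of Ga(a, b) over an interval equals (a/b) times the Ga(a+1, b) probability of that interval, so the conditional means in step 2 need no numerical quadrature:

```python
import numpy as np
from scipy import stats

def gamma_mse_rps(k, a, b, tol=1e-10, max_iter=2000):
    """Self-consistency iteration for k MSE-RPs of Ga(a, b). Starts from the
    NT-RPs and repeatedly replaces each point by E[X | X in I_i] until the
    largest update falls below tol."""
    dist = stats.gamma(a, scale=1 / b)
    z = dist.ppf((2 * np.arange(1, k + 1) - 1) / (2 * k))   # step 1: NT-RPs
    for _ in range(max_iter):
        mid = (z[:-1] + z[1:]) / 2
        edges = np.concatenate(([0.0], mid, [np.inf]))
        p = np.diff(dist.cdf(edges))                        # interval masses
        # E[X | I_i] via the Ga(a+1, b) mass of each interval
        z_new = (a / b) * np.diff(stats.gamma.cdf(edges, a + 1, scale=1 / b)) / p
        converged = np.max(np.abs(z_new - z)) < tol         # step 2 check
        z = z_new                                           # step 3: iterate
        if converged:
            break
    return z, p

z5, p5 = gamma_mse_rps(5, a=2, b=0.5)   # Theorem 3: sum(p5 * z5) equals a/b = 4
```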

Appendix C. Median Estimates and Confidence Intervals of a and b

Table A1. Median estimates and confidence intervals of a and b (method of moments).

| k | RPs | Ga(1, 0.5): $\hat{a}_{m2}$ | Ga(1, 0.5): $\hat{b}_{m2}$ | Ga(2, 0.5): $\hat{a}_{m2}$ | Ga(2, 0.5): $\hat{b}_{m2}$ | Ga(7.5, 1): $\hat{a}_{m2}$ | Ga(7.5, 1): $\hat{b}_{m2}$ |
|---|---|---|---|---|---|---|---|
| 5 | MSE | **1.081 (0.979, 1.198)** | **0.540 (0.484, 0.608)** | **2.166 (1.985, 2.371)** | **0.541 (0.493, 0.598)** | **8.143 (7.548, 8.820)** | **1.086 (1.003, 1.180)** |
| | NT | 1.440 (1.336, 1.555) | 0.772 (0.735, 0.813) | 2.728 (2.552, 2.922) | 0.707 (0.670, 0.748) | 9.890 (9.316, 10.516) | 1.332 (1.261, 1.413) |
| | MC | 2.719 (2.533, 2.930) | 1.585 (1.501, 1.681) | 5.260 (4.934, 5.632) | 1.382 (1.307, 1.467) | 17.210 (16.200, 18.380) | 2.240 (2.114, 2.387) |
| 20 | MSE | **1.011 (0.894, 1.139)** | **0.505 (0.440, 0.577)** | **2.015 (1.815, 2.238)** | **0.504 (0.450, 0.565)** | **7.550 (6.896, 8.281)** | **1.007 (0.916, 1.108)** |
| | NT | 1.133 (1.045, 1.229) | 0.576 (0.532, 0.626) | 2.197 (2.034, 2.377) | 0.554 (0.513, 0.601) | 8.050 (7.477, 8.719) | 1.076 (1.000, 1.167) |
| | MC | 1.180 (1.090, 1.282) | 0.603 (0.563, 0.650) | 2.253 (2.086, 2.440) | 0.568 (0.529, 0.614) | 8.185 (7.586, 8.868) | 1.093 (1.014, 1.184) |
| 100 | MSE | **1.007 (0.888, 1.135)** | **0.503 (0.436, 0.576)** | **2.004 (1.799, 2.223)** | **0.501 (0.446, 0.561)** | **7.516 (6.844, 8.242)** | **1.002 (0.911, 1.101)** |
| | NT | 1.039 (0.941, 1.151) | 0.521 (0.467, 0.584) | 2.050 (1.870, 2.251) | 0.513 (0.466, 0.569) | 7.626 (7.013, 8.322) | 1.017 (0.934, 1.111) |
| | MC | 1.038 (0.942, 1.149) | 0.518 (0.468, 0.579) | 2.043 (1.866, 2.246) | 0.510 (0.464, 0.564) | 7.591 (6.966, 8.300) | 1.011 (0.926, 1.108) |
| | Ga(a, b) | 1.004 (0.885, 1.131) | 0.501 (0.436, 0.574) | 2.004 (1.797, 2.230) | 0.501 (0.446, 0.563) | 7.509 (6.838, 8.257) | 1.001 (0.909, 1.104) |
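The method-of-moments estimates reported in Table A1 follow from matching the first two moments of $Ga(a, b)$ with shape $a$ and rate $b$ (mean $a/b$, variance $a/b^2$), which invert to $\hat{a} = \bar{x}^2/s^2$ and $\hat{b} = \bar{x}/s^2$. A minimal sketch (whether the biased or unbiased sample variance is used is an assumption here):

```python
import numpy as np

def gamma_mom(x):
    """Method-of-moments estimates (a_hat, b_hat) for Ga(a, b) with rate b:
    mean = a/b and variance = a/b**2 invert to a = mean**2/var, b = mean/var."""
    x = np.asarray(x, dtype=float)
    m, v = x.mean(), x.var()  # population variance (ddof=0); an assumption
    return m * m / v, m / v
```

For example, any data set with sample mean 2 and variance 4 yields $\hat{a} = 1$ and $\hat{b} = 0.5$, matching the true parameters of $Ga(1, 0.5)$.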
Table A2. Median estimates and confidence intervals of a and b (MLEs).

| k | RPs | Ga(1, 0.5): $\hat{a}_{mle}$ | Ga(1, 0.5): $\hat{b}_{mle}$ | Ga(2, 0.5): $\hat{a}_{mle}$ | Ga(2, 0.5): $\hat{b}_{mle}$ | Ga(7.5, 1): $\hat{a}_{mle}$ | Ga(7.5, 1): $\hat{b}_{mle}$ |
|---|---|---|---|---|---|---|---|
| 5 | MSE | 1.379 (1.305, 1.459) | 0.689 (0.627, 0.759) | **2.442 (2.298, 2.599)** | **0.610 (0.563, 0.662)** | **8.394 (7.844, 9.022)** | **1.120 (1.039, 1.209)** |
| | NT | **1.243 (1.175, 1.320)** | **0.667 (0.625, 0.713)** | 2.535 (2.395, 2.689) | 0.658 (0.619, 0.701) | 9.709 (9.179, 10.297) | 1.308 (1.236, 1.389) |
| | MC | 2.383 (2.258, 2.528) | 1.354 (1.277, 1.443) | 4.947 (4.683, 5.253) | 1.289 (1.217, 1.372) | 16.959 (16.009, 18.067) | 2.203 (2.078, 2.349) |
| 20 | MSE | 1.087 (1.018, 1.161) | 0.543 (0.494, 0.596) | **2.066 (1.929, 2.241)** | **0.516 (0.474, 0.566)** | **7.589 (6.982, 8.261)** | **1.012 (0.929, 1.106)** |
| | NT | 1.057 (0.988, 1.134) | **0.538 (0.493, 0.586)** | 2.116 (1.973, 2.278) | 0.534 (0.494, 0.580) | 7.977 (7.425, 8.618) | 1.067 (0.990, 1.155) |
| | MC | **1.083 (1.011, 1.164)** | 0.554 (0.508, 0.607) | 2.168 (2.017, 2.339) | 0.548 (0.505, 0.597) | 8.166 (7.571, 8.846) | 1.091 (1.008, 1.186) |
| 100 | MSE | 1.022 (0.950, 1.101) | **0.510 (0.463, 0.564)** | **2.009 (1.857, 2.177)** | **0.501 (0.460, 0.550)** | **7.521 (6.901, 8.192)** | **1.002 (0.919, 1.096)** |
| | NT | **1.017 (0.945, 1.098)** | **0.510 (0.465, 0.561)** | 2.025 (1.876, 2.190) | 0.507 (0.466, 0.555) | 7.605 (7.025, 8.267) | 1.015 (0.934, 1.105) |
| | MC | 1.020 (0.948, 1.101) | 0.509 (0.463, 0.562) | 2.028 (1.876, 2.199) | 0.506 (0.463, 0.555) | 7.604 (7.006, 8.280) | 1.013 (0.930, 1.106) |
| | Ga(a, b) | 1.002 (0.928, 1.083) | 0.501 (0.453, 0.553) | 2.003 (1.849, 2.176) | 0.501 (0.456, 0.549) | 7.512 (6.908, 8.208) | 1.001 (0.918, 1.097) |
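The MLEs in Table A2 maximize the two-parameter gamma likelihood. One way to reproduce such fits is SciPy's `gamma.fit` with the location pinned at zero; the distribution $Ga(2, 0.5)$ and sample size below are illustrative, not the paper's exact settings:

```python
import numpy as np
from scipy.stats import gamma

# Draw a sample from Ga(2, 0.5) (shape 2, rate 0.5, i.e. scale 2) and fit
# the two-parameter gamma by maximum likelihood with location fixed at 0.
rng = np.random.default_rng(0)
x = gamma.rvs(2.0, scale=2.0, size=5000, random_state=rng)
a_hat, _, scale_hat = gamma.fit(x, floc=0)  # returns (shape, loc, scale)
b_hat = 1.0 / scale_hat  # convert the fitted scale back to the rate b
print(a_hat, b_hat)
```

Note that SciPy parametrizes by scale, so the rate used throughout this paper is recovered as $b = 1/\text{scale}$.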

Figure 1. Probability density functions for Ga(1, 0.5), Ga(2, 0.5) and Ga(7.5, 1).
Table 1. Summary of results from RPs of Ga(1, 0.5) in parameter estimation.

| RPs | k | μ | σ² | Skewness | Kurtosis | $\hat{a}_{m2}$ | $\hat{b}_{m2}$ | $\overline{PD}_{\hat{a}_{m2}}$ | $\overline{PD}_{\hat{b}_{m2}}$ | $\overline{PD}_{\hat{a}_{mle}}$ | $\overline{PD}_{\hat{b}_{mle}}$ |
|---|---|---|---|---|---|---|---|---|---|---|---|
| MSE-RPs | 5 | 2.001 | 3.708 | 1.850 | 3.971 | 1.080 | 0.540 | 0.086 | 0.088 | 0.380 | 0.381 |
| | 20 | 2.001 | 3.978 | 1.989 | 5.767 | 1.007 | 0.503 | 0.050 | 0.056 | 0.088 | 0.089 |
| | 100 | 2.001 | 3.998 | 2.003 | 5.996 | 1.002 | 0.501 | 0.050 | 0.056 | 0.036 | 0.043 |
| NT-RPs | 5 | 1.866 | 2.419 | 0.775 | −0.752 | 1.440 | 0.772 | 0.441 | 0.545 | 0.244 | 0.334 |
| | 20 | 1.967 | 3.419 | 1.394 | 1.470 | 1.132 | 0.576 | 0.134 | 0.153 | 0.060 | 0.078 |
| | 100 | 1.995 | 3.839 | 1.759 | 3.662 | 1.036 | 0.520 | 0.054 | 0.059 | 0.034 | 0.043 |
| MC-RPs ¹ | 5 | 2.069 | 3.516 | 0.576 | −0.839 | 2.720 | 1.586 | 1.818 | 2.312 | 1.569 | 1.939 |
| | 20 | 1.945 | 3.629 | 1.329 | 1.571 | 1.269 | 0.684 | 0.348 | 0.401 | 0.244 | 0.327 |
| | 100 | 1.991 | 3.987 | 1.751 | 3.797 | 1.051 | 0.536 | 0.168 | 0.181 | 0.108 | 0.127 |
| Ga(1, 0.5) | | 2 | 4 | 2 | 6 | 1 | 0.5 | – | – | – | – |
Table 2. Summary of results from RPs of Ga(2, 0.5) in parameter estimation.

| RPs | k | μ | σ² | Skewness | Kurtosis | $\hat{a}_{m2}$ | $\hat{b}_{m2}$ | $\overline{PD}_{\hat{a}_{m2}}$ | $\overline{PD}_{\hat{b}_{m2}}$ | $\overline{PD}_{\hat{a}_{mle}}$ | $\overline{PD}_{\hat{b}_{mle}}$ |
|---|---|---|---|---|---|---|---|---|---|---|---|
| MSE-RPs | 5 | 3.999 | 7.396 | 1.304 | 1.685 | 2.163 | 0.541 | 0.085 | 0.086 | 0.222 | 0.223 |
| | 20 | 3.999 | 7.954 | 1.403 | 2.842 | 2.011 | 0.503 | 0.044 | 0.048 | 0.042 | 0.046 |
| | 100 | 3.999 | 7.997 | 1.412 | 2.976 | 2.001 | 0.500 | 0.044 | 0.047 | 0.033 | 0.037 |
| NT-RPs | 5 | 3.855 | 5.449 | 0.552 | −0.934 | 2.727 | 0.707 | 0.365 | 0.416 | 0.269 | 0.317 |
| | 20 | 3.963 | 7.149 | 1.010 | 0.532 | 2.196 | 0.554 | 0.101 | 0.112 | 0.061 | 0.071 |
| | 100 | 3.993 | 7.780 | 1.266 | 1.825 | 2.049 | 0.513 | 0.044 | 0.048 | 0.034 | 0.038 |
| MC-RPs | 5 | 4.089 | 6.912 | 0.416 | −0.904 | 5.261 | 1.382 | 1.769 | 1.925 | 1.672 | 1.800 |
| | 20 | 3.917 | 7.269 | 0.969 | 0.755 | 2.425 | 0.633 | 0.310 | 0.339 | 0.260 | 0.304 |
| | 100 | 3.981 | 7.982 | 1.266 | 1.930 | 2.074 | 0.525 | 0.146 | 0.153 | 0.112 | 0.120 |
| Ga(2, 0.5) | | 4 | 8 | 1.414 | 3 | 2 | 0.5 | – | – | – | – |
Table 3. Summary of results from RPs of Ga(7.5, 1) in parameter estimation.

| RPs | k | μ | σ² | Skewness | Kurtosis | $\hat{a}_{m2}$ | $\hat{b}_{m2}$ | $\overline{PD}_{\hat{a}_{m2}}$ | $\overline{PD}_{\hat{b}_{m2}}$ | $\overline{PD}_{\hat{a}_{mle}}$ | $\overline{PD}_{\hat{b}_{mle}}$ |
|---|---|---|---|---|---|---|---|---|---|---|---|
| MSE-RPs | 5 | 7.499 | 6.911 | 0.672 | 0.024 | 8.139 | 1.085 | 0.088 | 0.088 | 0.121 | 0.121 |
| | 20 | 7.499 | 7.455 | 0.724 | 0.711 | 7.545 | 1.006 | 0.038 | 0.039 | 0.036 | 0.037 |
| | 100 | 7.499 | 7.498 | 0.730 | 0.795 | 7.502 | 1.000 | 0.038 | 0.040 | 0.035 | 0.036 |
| NT-RPs | 5 | 7.424 | 5.575 | 0.284 | −1.067 | 9.885 | 1.332 | 0.319 | 0.333 | 0.296 | 0.309 |
| | 20 | 7.480 | 6.946 | 0.530 | −0.213 | 8.056 | 1.077 | 0.076 | 0.079 | 0.067 | 0.070 |
| | 100 | 7.496 | 7.374 | 0.662 | 0.381 | 7.620 | 1.017 | 0.038 | 0.039 | 0.036 | 0.037 |
| MC-RPs | 5 | 7.575 | 6.394 | 0.225 | −0.950 | 19.282 | 2.561 | 1.480 | 1.431 | 1.480 | 1.422 |
| | 20 | 7.416 | 6.901 | 0.494 | 0.076 | 8.754 | 1.187 | 0.286 | 0.296 | 0.277 | 0.290 |
| | 100 | 7.478 | 7.490 | 0.670 | 0.443 | 7.714 | 1.034 | 0.127 | 0.127 | 0.116 | 0.116 |
| Ga(7.5, 1) | | 7.5 | 7.5 | 0.730 | 0.8 | 7.5 | 1 | – | – | – | – |
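The bottom rows of Tables 1–3 are the theoretical moments of $Ga(a, b)$ with shape $a$ and rate $b$: mean $a/b$, variance $a/b^2$, skewness $2/\sqrt{a}$, and excess kurtosis $6/a$. They can be checked directly with SciPy, which parametrizes by scale $= 1/b$:

```python
from scipy.stats import gamma

# Theoretical mean, variance, skewness and (excess) kurtosis of Ga(a, b),
# matching the population rows of Tables 1-3.
origin = {}
for a, b in [(1, 0.5), (2, 0.5), (7.5, 1)]:
    m, v, s, k = gamma.stats(a, scale=1 / b, moments="mvsk")
    origin[(a, b)] = (float(m), float(v), float(s), float(k))
print(origin)
```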
Table 4. Mean (standard deviation) of MLEs from samples and HD standardized samples.

| n | | Ga(1, 0.5) Sample | Ga(1, 0.5) HD-Sample | Ga(2, 0.5) Sample | Ga(2, 0.5) HD-Sample | Ga(7.5, 1) Sample | Ga(7.5, 1) HD-Sample |
|---|---|---|---|---|---|---|---|
| 50 | $\hat{a}$ | 1.060 (0.195) | 1.075 (0.192) | 2.115 (0.415) | 2.104 (0.405) | 7.964 (1.645) | 7.794 (1.604) |
| | $\hat{b}$ | 0.540 (0.127) | 0.535 (0.126) | 0.534 (0.118) | 0.524 (0.116) | 1.064 (0.227) | 1.038 (0.221) |
| 200 | $\hat{a}$ | 1.021 (0.090) | 1.020 (0.090) | 2.029 (0.192) | 2.012 (0.189) | 7.616 (0.759) | 7.502 (0.746) |
| | $\hat{b}$ | 0.512 (0.058) | 0.507 (0.058) | 0.508 (0.054) | 0.502 (0.054) | 1.016 (0.104) | 0.999 (0.103) |
| 500 | $\hat{a}$ | 1.012 (0.056) | 1.010 (0.056) | 2.010 (0.120) | 2.000 (0.119) | 7.543 (0.472) | 7.477 (0.468) |
| | $\hat{b}$ | 0.507 (0.036) | 0.504 (0.036) | 0.503 (0.034) | 0.499 (0.033) | 1.006 (0.065) | 0.997 (0.064) |
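The HD standardization compared in Tables 4–6 is built on the Harrell–Davis quantile estimator, a Beta-weighted average of the order statistics. A sketch of the estimator itself (the standardization wrapper around it is omitted, and the function name is ours):

```python
import numpy as np
from scipy.stats import beta

def hd_quantile(x, q):
    """Harrell-Davis estimate of the q-th quantile of the data x:
    a Beta((n+1)q, (n+1)(1-q))-weighted average of the order statistics."""
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    i = np.arange(1, n + 1)
    a, b = (n + 1) * q, (n + 1) * (1 - q)
    # Weight on the i-th order statistic: I_{i/n}(a, b) - I_{(i-1)/n}(a, b),
    # where I is the regularized incomplete beta function (Beta CDF).
    w = beta.cdf(i / n, a, b) - beta.cdf((i - 1) / n, a, b)
    return float(np.dot(w, x))
```

Because every order statistic receives positive weight, the estimator is smooth in q, which is what makes the standardization of an entire sample well behaved.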
Table 5. Summary of results for MSE-RPs from the estimated gamma distributions (n = 200, k = 20).

| Distribution | Data | μ | σ² | Skewness | Kurtosis | $\hat{a}_{m2}$ | $\hat{b}_{m2}$ |
|---|---|---|---|---|---|---|---|
| Ga(1, 0.5) | sample | 1.995 | 3.873 | 1.966 | 5.641 | 1.028 | 0.515 |
| | HD-sample | 2.013 | 3.946 | 1.967 | 5.647 | 1.027 | 0.510 |
| | origin | 2 | 4 | 2 | 6 | 1 | 0.5 |
| Ga(2, 0.5) | sample | 3.994 | 7.818 | 1.393 | 2.801 | 2.041 | 0.511 |
| | HD-sample | 4.008 | 7.938 | 1.399 | 2.825 | 2.023 | 0.505 |
| | origin | 4 | 8 | 1.414 | 3 | 2 | 0.5 |
| Ga(7.5, 1) | sample | 7.496 | 7.334 | 0.719 | 0.699 | 7.662 | 1.022 |
| | HD-sample | 7.509 | 7.472 | 0.724 | 0.711 | 7.547 | 1.005 |
| | origin | 7.5 | 7.5 | 0.730 | 0.8 | 7.5 | 1 |
Table 6. Mean (standard deviation) of resampled MLEs from samples and HD standardized samples.

| n ($n_r$) | | Ga(1, 0.5) Sample | Ga(1, 0.5) HD-Sample | Ga(2, 0.5) Sample | Ga(2, 0.5) HD-Sample | Ga(7.5, 1) Sample | Ga(7.5, 1) HD-Sample |
|---|---|---|---|---|---|---|---|
| 50 | $\hat{a}$ | 1.114 (0.295) | 1.127 (0.283) | 2.234 (0.630) | 2.216 (0.594) | 8.446 (2.504) | 8.245 (2.340) |
| | $\hat{b}$ | 0.577 (0.196) | 0.572 (0.185) | 0.569 (0.181) | 0.558 (0.170) | 1.131 (0.347) | 1.101 (0.323) |
| 200 | $\hat{a}$ | 1.034 (0.131) | 1.030 (0.127) | 2.059 (0.279) | 2.034 (0.269) | 7.741 (1.102) | 7.594 (1.061) |
| | $\hat{b}$ | 0.522 (0.085) | 0.515 (0.082) | 0.518 (0.079) | 0.509 (0.076) | 1.034 (0.151) | 1.013 (0.146) |
| 500 | $\hat{a}$ | 1.017 (0.079) | 1.016 (0.079) | 2.021 (0.167) | 2.012 (0.168) | 7.584 (0.660) | 7.527 (0.662) |
| | $\hat{b}$ | 0.510 (0.051) | 0.508 (0.051) | 0.506 (0.047) | 0.503 (0.047) | 1.012 (0.091) | 1.004 (0.091) |
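The resampled MLEs summarized in Table 6 refit the gamma parameters on bootstrap resamples drawn with replacement from each sample. A sketch under assumed settings (the function name, `n_boot`, and seeding are ours, not the paper's exact configuration):

```python
import numpy as np
from scipy.stats import gamma

def bootstrap_mles(x, n_boot=50, seed=0):
    """Refit the two-parameter gamma MLEs (shape, rate) on n_boot
    bootstrap resamples of x, drawn with replacement."""
    rng = np.random.default_rng(seed)
    fits = []
    for _ in range(n_boot):
        xb = rng.choice(x, size=len(x), replace=True)
        a, _, scale = gamma.fit(xb, floc=0)
        fits.append((a, 1.0 / scale))  # convert scale to rate b
    return np.array(fits)
```

Summarizing the columns of the returned array (mean, standard deviation, percentiles) gives bootstrap analogues of the entries in Table 6.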
Share and Cite

Ke, X.; Wang, S.; Zhou, M.; Ye, H. New Approaches on Parameter Estimation of the Gamma Distribution. Mathematics 2023, 11, 927. https://doi.org/10.3390/math11040927
