Article

Improved Likelihood Inference Procedures for the Logistic Distribution

Statistics Program, Department of Mathematics, Statistics and Physics, College of Arts and Science, Qatar University, Doha 2713, Qatar
Symmetry 2022, 14(9), 1767; https://doi.org/10.3390/sym14091767
Submission received: 18 January 2022 / Revised: 28 January 2022 / Accepted: 10 February 2022 / Published: 25 August 2022

Abstract

We consider third-order likelihood inferences for the parameters, quantiles and reliability function of the logistic distribution. This theory involves the conditioning and marginalization of the likelihood function. The logistic distribution is a symmetric distribution which is closely related to normal distributions, and which has several applications because of its mathematical tractability and the availability of a closed-form cumulative distribution function. The performance of the third-order techniques is investigated and compared with the first-order techniques using simulations. The results show that the third-order techniques are far more accurate than the usual first-order inference procedures. This results in more accurate inferences about the functions of the parameters of the distribution, which leads to more precise conclusions about the phenomenon modeled by the logistic distribution.

1. Introduction

The logistic distribution is a continuous distribution with many important applications in various fields, including logistic regression, logit models, neural networks, and finance. The logistic distribution is symmetrical, and it is very close in shape to the normal distribution, but it has heavier tails. The symmetry of the logistic distribution allows it to provide a better model than the normal distribution for applications such as extreme events. The logistic model has been applied earlier as a growth model in human populations and in some biological organisms. See [1] for more details and relevant references. Balakrishnan [2] provided a detailed account of this distribution as well.
Inference procedures for the parameters of the logistic distribution and the related quantities have received considerable attention in the literature. For example, the construction of confidence intervals for the parameters of this distribution was discussed by [3], who determined the necessary percentage points of the pivotal quantities through Monte Carlo simulations. Several other authors have worked on various aspects of this distribution and the related log-logistic distribution, including [4,5] among others.
Inferences about functions of the parameters of the logistic distribution are the key focus of several scientific investigations. Such a function may be a distribution quantile or the reliability function, both of which are needed in industrial testing to determine the reliability of products and warranty periods (see [6] for detailed examples of the importance of these and other functions in reliability studies). The inferences are usually based on the maximum likelihood estimator and related likelihood quantities. They are justified by the asymptotic properties of likelihood inference procedures and require large samples for their validity and accuracy. Recent research on maximum likelihood estimation in various models of interest includes [4,5,7]. In this paper, we derive higher-order likelihood inference procedures for the parameters, quantiles, and reliability function of this distribution. The methods are based on the third-order inference procedures developed by [8]. Third-order procedures usually need smaller sample sizes to achieve the desired accuracy and validity, which reduces the cost and duration of a scientific investigation. The general methodology is reviewed in Section 2 and developed for the logistic distribution in Section 3. A simulation study is conducted in Section 4 to investigate the performance of the third-order procedures and to verify that they achieve the desired objectives. The results and conclusions are given in the final section.

2. Likelihood Inference

Likelihood theory and higher-order refinements have received considerable attention in the literature. Barndorff-Nielsen and Cox [9] and Severini [10] have discussed this topic in detail. One of the most important lines of research is the likelihood ratio statistic and its refinements and modifications. Fraser and Reid [11] and Fraser et al. [8] have developed third-order refinements to the likelihood ratio statistic. These modifications were further investigated for several statistical models, including the Weibull distribution [12] and lognormal regression [13], among others. It appears that little attention has been paid so far to developing third-order likelihood inferences for the logistic distribution and the related functions of its parameters. Since this model has important applications in various fields, this work is an attempt to fill this gap.
Let $y$ be a vector of $n$ observations from a continuous statistical model with joint density $f(y;\theta)$, where $\theta$ is a parameter vector of length $p$. Consider inference for a scalar parameter of interest $\psi = \psi(\theta)$. Third-order inference procedures use modifications of the first-order statistic
$$q_m = \frac{\hat\psi - \psi}{\hat\sigma_\psi},$$
where $\hat\psi$ is the maximum likelihood estimator (MLE) of $\psi$.
Let $\hat{j}_{\theta\theta}$ be the observed information matrix and let $\hat{j}^{\theta\theta}$ be its inverse, with $\hat{j}^{\psi\psi}$ the element of the inverse corresponding to the scalar parameter $\psi$. Let $\psi_\theta(\hat\theta)$ be the gradient of the interest parameter $\psi$ evaluated at the MLE $\hat\theta$. Then $\hat\sigma_\psi$ is obtained from
$$\hat\sigma_\psi^2 = \hat{j}^{\psi\psi} = \psi_\theta'(\hat\theta)\,\hat{j}^{\theta\theta}\,\psi_\theta(\hat\theta).$$
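As a numerical illustration of this delta-method variance (a sketch of ours, with a made-up information matrix and gradient, not taken from the paper):

```python
# Hypothetical numbers for illustration only: a 2x2 observed information
# matrix j at the MLE and the gradient of psi(theta) at the MLE.
j = [[8.0, 1.0],
     [1.0, 5.0]]
det = j[0][0] * j[1][1] - j[0][1] * j[1][0]
# Inverse of the 2x2 observed information, i.e. j^{theta theta}
j_inv = [[ j[1][1] / det, -j[0][1] / det],
         [-j[1][0] / det,  j[0][0] / det]]
psi_grad = [1.0, 2.0]
# sigma_hat_psi^2 = psi_theta' j^{theta theta} psi_theta
var_psi = sum(psi_grad[a] * j_inv[a][b] * psi_grad[b]
              for a in range(2) for b in range(2))
# var_psi = 33/39, roughly 0.846
```

For the logistic distribution the information matrix and gradient are the ones derived in Section 3.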
The signed square root of the likelihood ratio statistic is
$$r = \operatorname{sgn}(\hat\psi - \psi)\left[2\{l(\hat\theta) - l(\hat\theta_\psi)\}\right]^{1/2},$$
where $\hat\theta_\psi$ is the restricted MLE of $\theta$ for a given $\psi$. Both $q_m$ and $r$ yield p-values that are accurate only to first order, $O(n^{-1/2})$. Fraser et al. [8] proposed simple and widely applicable formulas for a quantity $Q$ such that the resulting approximations to the p-value have third-order accuracy, $O(n^{-3/2})$. The approach used by [8] for the construction of $Q$ involves dimension reduction from $n$ to $p$ by conditioning on an approximate ancillary statistic. The resulting model is then approximated by a member of the exponential family, called the tangent exponential model (see [9]). The canonical parameter of the tangent exponential model is obtained by differentiating the log-likelihood function with respect to the data (the sample-space derivative). The nuisance parameter is then eliminated by marginalization.
Skovgaard [14] and Fraser and Reid [11] showed that only a second-order ancillary is required to achieve third-order accuracy. Let $y^0$ be the observed data point and let $\hat\theta^0$ be the corresponding maximum likelihood estimate. Consider a vector $z = (z_1, \ldots, z_n)'$ of pivotal quantities with $z_i = z_i(y_i, \theta)$; the array $V$ of "ancillary directions" is obtained from the pivotal $z(y,\theta)$ as
$$V = \left.\frac{\partial y}{\partial \theta'}\right|_{(y^0,\,\hat\theta^0)} = -\left.\left(\frac{\partial z}{\partial y'}\right)^{-1}\frac{\partial z}{\partial \theta'}\right|_{(y^0,\,\hat\theta^0)},$$
where the expression is calculated for fixed $z$. The second-order ancillary generated at the data point $y^0$ is free of the data point to second order. The local canonical parameter of the tangent exponential model is given by
$$\varphi(\theta)' = \frac{\partial l(\theta)}{\partial x}\,V.$$
The quantity $Q$ is given by
$$Q = \operatorname{sgn}(\hat\psi^0 - \psi)\,\frac{\left|\chi(\hat\theta^0) - \chi(\hat\theta_\psi^0)\right|}{\hat\sigma_\chi},$$
which is a standardized maximum likelihood departure in the parameterization $\chi(\theta)$, where
$$\chi(\theta) = \frac{\psi_\varphi(\hat\theta_\psi^0)}{\left|\psi_\varphi(\hat\theta_\psi^0)\right|}\,\varphi(\theta),$$
$\hat\theta_\psi^0$ is the constrained maximum likelihood value based on the data point $y^0$, and the gradient $\psi_\varphi(\theta)$ of $\psi(\theta)$ with respect to $\varphi(\theta)$ is calculated as
$$\psi_\varphi(\theta) = \frac{\partial \psi(\theta)}{\partial \theta'}\left(\frac{\partial \varphi(\theta)}{\partial \theta'}\right)^{-1} = \psi_\theta(\theta)\,\varphi_\theta^{-1}(\theta).$$
In the next section, we will apply this methodology to the logistic distribution, where we will consider inferences about the parameters, quantiles, and reliability function.

3. Third-Order Likelihood Inference in the Logistic Distribution

The probability density function of the logistic distribution with location parameter µ and scale parameter σ is given by:
$$f(x;\mu,\sigma) = \frac{e^{-(x-\mu)/\sigma}}{\sigma\left(1 + e^{-(x-\mu)/\sigma}\right)^2}, \qquad -\infty < x < \infty,\ -\infty < \mu < \infty,\ \sigma > 0.$$
The corresponding cumulative distribution function is given by:
$$F(x;\mu,\sigma) = \frac{1}{1 + e^{-(x-\mu)/\sigma}}, \qquad -\infty < x < \infty,\ -\infty < \mu < \infty,\ \sigma > 0.$$
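As a quick numerical sanity check (a sketch of ours, not part of the paper, with arbitrary parameter values), the density above is indeed the derivative of this cumulative distribution function:

```python
import math

def logistic_pdf(x, mu, sigma):
    """Logistic density with location mu and scale sigma."""
    e = math.exp(-(x - mu) / sigma)
    return e / (sigma * (1.0 + e) ** 2)

def logistic_cdf(x, mu, sigma):
    """Logistic cumulative distribution function."""
    return 1.0 / (1.0 + math.exp(-(x - mu) / sigma))

# A central finite difference of F should reproduce f.
mu, sigma, h = 0.5, 2.0, 1e-6
for x in (-3.0, 0.0, 4.2):
    num_deriv = (logistic_cdf(x + h, mu, sigma)
                 - logistic_cdf(x - h, mu, sigma)) / (2 * h)
    assert abs(num_deriv - logistic_pdf(x, mu, sigma)) < 1e-6
```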
Let $x_1, \ldots, x_n$ be a random sample of size $n$ from this distribution. The likelihood function of $\theta = (\mu, \sigma)$ is given by
$$L(\theta) = L(\mu,\sigma) = \prod_{i=1}^{n} \frac{e^{-(x_i-\mu)/\sigma}}{\sigma\left(1 + e^{-(x_i-\mu)/\sigma}\right)^2} = \sigma^{-n}\, e^{-\sum_{i=1}^{n}(x_i-\mu)/\sigma} \prod_{i=1}^{n}\left(1 + e^{-(x_i-\mu)/\sigma}\right)^{-2}.$$
The corresponding log-likelihood function is given by
$$l(\theta) = l(\mu,\sigma) = -n\ln\sigma - \sum_{i=1}^{n}\frac{x_i-\mu}{\sigma} - 2\sum_{i=1}^{n}\ln\left(1 + e^{-(x_i-\mu)/\sigma}\right).$$
To obtain the maximum likelihood estimator and the associated quantities, and to develop the third-order inference techniques, we need the following first- and second-order partial derivatives:
$$l_\mu(\theta) = \frac{n}{\sigma} - 2\sum_{i=1}^{n}\frac{1}{\sigma}\,\frac{e^{-(x_i-\mu)/\sigma}}{1+e^{-(x_i-\mu)/\sigma}},$$
$$l_\sigma(\theta) = -\frac{n}{\sigma} + \sum_{i=1}^{n}\frac{x_i-\mu}{\sigma^2} - 2\sum_{i=1}^{n}\frac{x_i-\mu}{\sigma^2}\,\frac{e^{-(x_i-\mu)/\sigma}}{1+e^{-(x_i-\mu)/\sigma}},$$
$$l_{\mu\mu}(\theta) = -2\sum_{i=1}^{n}\frac{1}{\sigma^2}\left[\frac{e^{-(x_i-\mu)/\sigma}}{1+e^{-(x_i-\mu)/\sigma}} - \frac{e^{-2(x_i-\mu)/\sigma}}{\left(1+e^{-(x_i-\mu)/\sigma}\right)^2}\right] = -2\sum_{i=1}^{n}\frac{1}{\sigma^2}\,\frac{e^{-(x_i-\mu)/\sigma}}{\left(1+e^{-(x_i-\mu)/\sigma}\right)^2},$$
$$l_{\mu\sigma}(\theta) = -\frac{n}{\sigma^2} + 2\sum_{i=1}^{n}\frac{1}{\sigma^2}\,\frac{e^{-(x_i-\mu)/\sigma}\left(1 + e^{-(x_i-\mu)/\sigma} - \frac{x_i-\mu}{\sigma}\right)}{\left(1+e^{-(x_i-\mu)/\sigma}\right)^2},$$
$$l_{\sigma\sigma}(\theta) = \frac{n}{\sigma^2} - 2\sum_{i=1}^{n}\frac{x_i-\mu}{\sigma^3} + 2\sum_{i=1}^{n}\left[\frac{2(x_i-\mu)}{\sigma^3}\,\frac{e^{-(x_i-\mu)/\sigma}}{1+e^{-(x_i-\mu)/\sigma}} - \frac{(x_i-\mu)^2}{\sigma^4}\,\frac{e^{-(x_i-\mu)/\sigma}}{\left(1+e^{-(x_i-\mu)/\sigma}\right)^2}\right].$$
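To make these derivatives concrete, the following sketch (our own code, with hypothetical function names, not taken from the paper) computes the MLE by damped Newton-Raphson using the score and Hessian above, starting from the moment estimators:

```python
import math

def score_and_hessian(x, mu, sigma):
    """Score (l_mu, l_sigma) and Hessian entries (l_mumu, l_musigma, l_sigsig)."""
    n = len(x)
    l_m, l_s = n / sigma, -n / sigma
    l_mm, l_ms, l_ss = 0.0, -n / sigma**2, n / sigma**2
    for xi in x:
        z = (xi - mu) / sigma
        e = math.exp(-z)
        p = e / (1.0 + e)        # e^{-z} / (1 + e^{-z})
        w = p / (1.0 + e)        # e^{-z} / (1 + e^{-z})^2
        l_m -= 2.0 * p / sigma
        l_s += z * (1.0 - 2.0 * p) / sigma
        l_mm -= 2.0 * w / sigma**2
        l_ms += 2.0 * (p - z * w) / sigma**2
        l_ss += 2.0 * z * (2.0 * p - z * w - 1.0) / sigma**2
    return (l_m, l_s), (l_mm, l_ms, l_ss)

def logistic_mle(x, iters=60):
    """Damped Newton-Raphson from the moment estimators of (mu, sigma)."""
    n = len(x)
    mu = sum(x) / n
    # logistic variance is (pi * sigma)^2 / 3, so invert for a starting sigma
    sigma = math.sqrt(3.0 * sum((v - mu) ** 2 for v in x) / n) / math.pi
    for _ in range(iters):
        (g1, g2), (a, b, c) = score_and_hessian(x, mu, sigma)
        det = a * c - b * b
        d_mu = (c * g1 - b * g2) / det       # components of H^{-1} grad
        d_sg = (a * g2 - b * g1) / det
        step = 1.0
        while sigma - step * d_sg <= 0.5 * sigma:   # keep sigma positive, damp big steps
            step *= 0.5
        mu, sigma = mu - step * d_mu, sigma - step * d_sg
    return mu, sigma

# Quick check on a deterministic sample built from the inverse cdf (mu=0, sigma=1).
n = 25
x = [-math.log(1.0 / ((i - 0.5) / n) - 1.0) for i in range(1, n + 1)]
mu_hat, sigma_hat = logistic_mle(x)
```

The logistic log-likelihood is well behaved here, so a simple damping of the $\sigma$ step suffices in practice; from the moment starting values the iteration settles in a few steps.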
The array of ancillary directions $V$, obtained from the definition above with the pivotal quantities $z_i = (x_i - \mu)/\sigma$, is given by
$$V = (V_1, V_2) = \begin{pmatrix} 1 & \dfrac{x_1 - \hat\mu}{\hat\sigma} \\ 1 & \dfrac{x_2 - \hat\mu}{\hat\sigma} \\ \vdots & \vdots \\ 1 & \dfrac{x_n - \hat\mu}{\hat\sigma} \end{pmatrix}.$$
Define the Lagrangian function as follows (see [12]):
$$H(\theta) = l(\theta) + \lambda\left(\psi(\theta) - \psi_0\right).$$
The first-order conditions are
$$H_\mu(\theta) = l_\mu(\theta) + \lambda\,\psi_\mu(\theta) = 0, \quad H_\sigma(\theta) = l_\sigma(\theta) + \lambda\,\psi_\sigma(\theta) = 0, \quad H_\lambda(\theta) = \psi(\theta) - \psi_0 = 0.$$
The tilted log-likelihood function is defined as
$$\tilde{l}(\theta) = l(\theta) + \tilde\lambda\left(\psi(\theta) - \psi_0\right),$$
where $\tilde\lambda$ is the value of the Lagrange multiplier at the solution. We now consider some specific cases.

3.1. Inference about the Scale Parameter

We have $\psi_0 = \sigma_0$; therefore, the Lagrangian and its derivatives become
$$H(\theta) = l(\theta) + \lambda(\sigma - \sigma_0), \quad H_\mu(\theta) = l_\mu(\theta), \quad H_\sigma(\theta) = l_\sigma(\theta) + \lambda, \quad H_\lambda(\theta) = \sigma - \sigma_0.$$
It follows that $\tilde\lambda = -l_\sigma(\tilde\theta)$, where $\tilde\theta = \hat\theta_{\psi_0}$, and the tilted log-likelihood becomes
$$\tilde{l}(\theta) = l(\theta) + \tilde\lambda(\sigma - \sigma_0).$$
We have
$$\tilde{l}_\mu(\theta) = l_\mu(\theta), \quad \tilde{l}_{\mu\mu}(\theta) = l_{\mu\mu}(\theta), \quad \tilde{l}_{\mu\sigma}(\theta) = l_{\mu\sigma}(\theta), \quad \tilde{l}_\sigma(\theta) = l_\sigma(\mu,\sigma) + \tilde\lambda, \quad \tilde{l}_{\sigma\sigma}(\theta) = l_{\sigma\sigma}(\mu,\sigma).$$
This implies that
$$\tilde{j}_{\theta\theta}(\tilde\theta) = j_{\theta\theta}(\tilde\theta).$$
Recall that the local canonical parameter is given by
$$\varphi(\theta)' = \frac{\partial l(\theta)}{\partial x}\,V.$$
For the logistic distribution, recall that
$$l(\theta) = -n\ln\sigma - \sum_{i=1}^{n}\frac{x_i-\mu}{\sigma} - 2\sum_{i=1}^{n}\ln\left(1 + e^{-(x_i-\mu)/\sigma}\right),$$
so the sample-space gradient is
$$\frac{\partial l(\theta)}{\partial x} = \left(\frac{\partial l(\theta)}{\partial x_1}, \ldots, \frac{\partial l(\theta)}{\partial x_n}\right), \qquad \frac{\partial l(\theta)}{\partial x_i} = -\frac{1}{\sigma} + \frac{2}{\sigma}\,\frac{e^{-(x_i-\mu)/\sigma}}{1+e^{-(x_i-\mu)/\sigma}}.$$
It follows that
$$\frac{\partial l(\theta)}{\partial x}\,V_1 = -\frac{n}{\sigma} + 2\sum_{i=1}^{n}\frac{1}{\sigma}\,\frac{e^{-(x_i-\mu)/\sigma}}{1+e^{-(x_i-\mu)/\sigma}} = -l_\mu(\theta).$$
The components of the local canonical parameter $\varphi(\theta)$ are therefore
$$\varphi_1(\theta) = -\frac{n}{\sigma} + 2\sum_{i=1}^{n}\frac{1}{\sigma}\,\frac{e^{-(x_i-\mu)/\sigma}}{1+e^{-(x_i-\mu)/\sigma}},$$
$$\varphi_2(\theta) = -\sum_{i=1}^{n}\frac{1}{\sigma}\left(\frac{x_i-\hat\mu}{\hat\sigma}\right) + 2\sum_{i=1}^{n}\frac{1}{\sigma}\,\frac{e^{-(x_i-\mu)/\sigma}}{1+e^{-(x_i-\mu)/\sigma}}\left(\frac{x_i-\hat\mu}{\hat\sigma}\right).$$
We also need
$$\varphi_\theta(\theta) = \begin{pmatrix} \varphi_{1\mu}(\theta) & \varphi_{1\sigma}(\theta) \\ \varphi_{2\mu}(\theta) & \varphi_{2\sigma}(\theta) \end{pmatrix},$$
where
$$\varphi_{1\mu}(\theta) = 2\sum_{i=1}^{n}\frac{1}{\sigma^2}\,\frac{e^{-(x_i-\mu)/\sigma}}{\left(1+e^{-(x_i-\mu)/\sigma}\right)^2} = -l_{\mu\mu}(\theta) = j_{\mu\mu}(\theta),$$
$$\varphi_{1\sigma}(\theta) = \frac{n}{\sigma^2} - 2\sum_{i=1}^{n}\frac{1}{\sigma^2}\,\frac{e^{-(x_i-\mu)/\sigma}\left(1 + e^{-(x_i-\mu)/\sigma} - \frac{x_i-\mu}{\sigma}\right)}{\left(1+e^{-(x_i-\mu)/\sigma}\right)^2} = -l_{\mu\sigma}(\theta) = j_{\mu\sigma}(\theta),$$
$$\varphi_{2\mu}(\theta) = 2\sum_{i=1}^{n}\frac{1}{\sigma^2}\,\frac{e^{-(x_i-\mu)/\sigma}}{\left(1+e^{-(x_i-\mu)/\sigma}\right)^2}\left(\frac{x_i-\hat\mu}{\hat\sigma}\right),$$
$$\varphi_{2\sigma}(\theta) = \sum_{i=1}^{n}\frac{1}{\sigma^2}\left(\frac{x_i-\hat\mu}{\hat\sigma}\right) - 2\sum_{i=1}^{n}\frac{1}{\sigma^2}\,\frac{e^{-(x_i-\mu)/\sigma}\left(1 + e^{-(x_i-\mu)/\sigma} - \frac{x_i-\mu}{\sigma}\right)}{\left(1+e^{-(x_i-\mu)/\sigma}\right)^2}\left(\frac{x_i-\hat\mu}{\hat\sigma}\right).$$
Now let $\psi_\theta(\theta) = (\psi_\mu(\theta), \psi_\sigma(\theta))$; in the present case $\psi_\theta(\theta) = (0, 1)$. We obtain
$$\chi(\theta) = \psi_\theta(\hat\theta_{\psi_0})\,\varphi_\theta^{-1}(\hat\theta_{\psi_0})\,\varphi(\theta).$$
An estimate of the variance of $\chi(\theta)$ is given by
$$\hat\sigma_\chi^2 = \psi_\theta'(\tilde\theta)\,\tilde{j}^{\theta\theta}(\tilde\theta)\,\psi_\theta(\tilde\theta).$$
The signed likelihood departure is given by
$$R = \operatorname{sgn}(\hat\psi - \psi_0)\left[2\{l(\hat\theta) - l(\tilde\theta)\}\right]^{1/2}.$$
The maximum likelihood departure is given by
$$Q = \operatorname{sgn}(\hat\psi - \psi_0)\,\left|\chi(\hat\theta) - \chi(\tilde\theta)\right|\left[\frac{\left|\hat{j}_{\theta\theta}(\hat\theta)\right|\left|\varphi_\theta(\hat\theta)\right|^{-2}}{\hat\sigma_\chi^2\left|\tilde{j}_{\theta\theta}(\tilde\theta)\right|\left|\varphi_\theta(\tilde\theta)\right|^{-2}}\right]^{1/2}.$$
The Lugannani and Rice [15] approximation of the tail probability is given by
$$\Phi(R) + \phi(R)\left(\frac{1}{R} - \frac{1}{Q}\right),$$
where $\phi$ and $\Phi$ denote the probability density function and the cumulative distribution function of the standard normal distribution. The Barndorff-Nielsen [16] formula for the tail probability is given by
$$\Phi\!\left(R - \frac{1}{R}\ln\frac{R}{Q}\right).$$
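The two approximations above can be coded directly as functions of $R$ and $Q$ (a sketch with helper names of our own):

```python
import math

def Phi(x):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def phi(x):
    """Standard normal density."""
    return math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)

def lugannani_rice(R, Q):
    """Lugannani-Rice tail approximation Phi(R) + phi(R)(1/R - 1/Q)."""
    return Phi(R) + phi(R) * (1.0 / R - 1.0 / Q)

def barndorff_nielsen(R, Q):
    """Barndorff-Nielsen approximation Phi(R - (1/R) log(R/Q))."""
    return Phi(R - math.log(R / Q) / R)
```

When $Q = R$ both formulas reduce to the first-order value $\Phi(R)$, and for nearby $R$ and $Q$ they agree closely. Both expressions are singular at $R = 0$, which in practice is handled by interpolating across a small neighborhood of $\hat\psi = \psi_0$.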

3.2. Inference about the Location Parameter

We have $\psi_0 = \mu_0$, and therefore
$$H(\theta) = l(\theta) + \lambda(\mu - \mu_0),$$
with $H_\mu(\theta) = l_\mu(\theta) + \lambda$, $H_\sigma(\theta) = l_\sigma(\theta)$, and $H_\lambda(\theta) = \mu - \mu_0$.
Equating these derivatives to zero and solving simultaneously gives $\tilde\lambda = -l_\mu(\mu_0, \tilde\sigma)$, and the tilted log-likelihood becomes
$$\tilde{l}(\theta) = l(\theta) + \tilde\lambda(\mu - \mu_0).$$
We have
$$\tilde{l}_\mu(\theta) = l_\mu(\theta) + \tilde\lambda, \quad \tilde{l}_{\mu\mu}(\theta) = l_{\mu\mu}(\theta), \quad \tilde{l}_{\mu\sigma}(\theta) = l_{\mu\sigma}(\theta), \quad \tilde{l}_\sigma(\theta) = l_\sigma(\theta), \quad \tilde{l}_{\sigma\sigma}(\theta) = l_{\sigma\sigma}(\theta).$$
This implies that
$$\tilde{j}_{\theta\theta}(\tilde\theta) = j_{\theta\theta}(\tilde\theta).$$
We also have $\psi_\theta(\theta) = (1, 0)$.

3.3. Inference about the Quantiles

The $p$th quantile of the logistic distribution, $x_p$, is given by
$$x_p = \mu - \sigma\ln\left(p^{-1} - 1\right).$$
Therefore, the interest parameter is of the form $\psi = \mu + A\sigma$, where $A = -\ln\left(p^{-1} - 1\right)$.
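As a check on this parameterization (our own snippet, with arbitrary parameter values), $F(x_p) = p$ holds exactly:

```python
import math

def logistic_quantile(p, mu, sigma):
    """p-th quantile x_p = mu - sigma * log(1/p - 1)."""
    return mu - sigma * math.log(1.0 / p - 1.0)

def logistic_cdf(x, mu, sigma):
    return 1.0 / (1.0 + math.exp(-(x - mu) / sigma))

mu, sigma = 1.0, 2.0
for p in (0.25, 0.5, 0.75):
    x_p = logistic_quantile(p, mu, sigma)
    assert abs(logistic_cdf(x_p, mu, sigma) - p) < 1e-12
# The interest parameter is psi = mu + A * sigma with A = -log(1/p - 1).
```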
The Lagrangian and its derivatives are
$$H(\theta) = l(\theta) + \lambda(\mu + A\sigma - \psi_0),$$
$$H_\mu(\theta) = l_\mu(\theta) + \lambda, \quad H_\sigma(\theta) = l_\sigma(\theta) + \lambda A, \quad H_\lambda(\theta) = \mu + A\sigma - \psi_0.$$
It follows that
$$\tilde\lambda = -l_\mu(\tilde\theta), \quad l_\sigma(\tilde\theta) = A\,l_\mu(\tilde\theta), \quad \tilde\mu = \psi_0 - A\tilde\sigma.$$
The tilted log-likelihood becomes
$$\tilde{l}(\theta) = l(\theta) + \tilde\lambda(\mu + A\sigma - \psi_0),$$
with
$$\tilde{l}_\mu(\theta) = l_\mu(\theta) + \tilde\lambda, \quad \tilde{l}_{\mu\mu}(\theta) = l_{\mu\mu}(\theta), \quad \tilde{l}_{\mu\sigma}(\theta) = l_{\mu\sigma}(\theta), \quad \tilde{l}_\sigma(\theta) = l_\sigma(\theta) + \tilde\lambda A, \quad \tilde{l}_{\sigma\sigma}(\theta) = l_{\sigma\sigma}(\theta).$$
This implies that
$$\tilde{j}_{\theta\theta}(\tilde\theta) = j_{\theta\theta}(\tilde\theta).$$
The remaining quantities are the same as in the scale-parameter case with $\psi_\theta(\theta) = (1, A)$.

3.4. Inference about the Reliability Function

The reliability function of the logistic distribution at time $t$ is given by
$$R(t) = 1 - F(t) = \frac{e^{-(t-\mu)/\sigma}}{1 + e^{-(t-\mu)/\sigma}}, \qquad -\infty < t < \infty.$$
Note that $\frac{R(t)}{1 - R(t)} = e^{-(t-\mu)/\sigma}$, so that $\frac{t-\mu}{\sigma} = -\ln\frac{R(t)}{1-R(t)} = B$. Therefore, inference about $R(t)$ reduces to inference about the interest parameter $\psi = (t-\mu)/\sigma$.
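A round-trip check of this reduction (our own snippet, arbitrary parameter values) confirms that $\psi = (t-\mu)/\sigma$ and $R(t)$ determine each other:

```python
import math

def reliability(t, mu, sigma):
    """Logistic reliability (survival) function at time t."""
    e = math.exp(-(t - mu) / sigma)
    return e / (1.0 + e)

mu, sigma, t = 0.3, 1.7, 2.0
R = reliability(t, mu, sigma)
psi = (t - mu) / sigma
# psi = -log(R / (1 - R)), so confidence limits for psi map directly
# into confidence limits for R(t).
assert abs(-math.log(R / (1.0 - R)) - psi) < 1e-12
```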
The Lagrangian and its derivatives for this situation are given by
$$H(\theta) = l(\theta) + \lambda\left(\frac{t-\mu}{\sigma} - \psi_0\right), \quad H_\mu(\theta) = l_\mu(\theta) - \frac{\lambda}{\sigma}, \quad H_\sigma(\theta) = l_\sigma(\theta) - \lambda\,\frac{t-\mu}{\sigma^2}, \quad H_\lambda(\theta) = \frac{t-\mu}{\sigma} - \psi_0.$$
By equating the derivatives to zero and solving, we obtain
$$l_\mu(\tilde\theta) = \frac{\tilde\lambda}{\tilde\sigma}, \quad l_\sigma(\tilde\theta) = \tilde\lambda\,\frac{t-\tilde\mu}{\tilde\sigma^2} = \frac{\tilde\lambda}{\tilde\sigma}\,\frac{t-\tilde\mu}{\tilde\sigma} = l_\mu(\tilde\theta)\,\frac{t-\tilde\mu}{\tilde\sigma}, \quad \frac{t-\tilde\mu}{\tilde\sigma} = \psi_0.$$
It follows that the tilted log-likelihood becomes
$$\tilde{l}(\theta) = l(\theta) + \tilde\lambda\left(\frac{t-\mu}{\sigma} - \psi_0\right).$$
We have
$$\tilde{l}_\mu(\theta) = l_\mu(\theta) - \frac{\tilde\lambda}{\sigma}, \quad \tilde{l}_{\mu\mu}(\theta) = l_{\mu\mu}(\theta), \quad \tilde{l}_{\mu\sigma}(\theta) = l_{\mu\sigma}(\theta) + \frac{\tilde\lambda}{\sigma^2}, \quad \tilde{l}_\sigma(\theta) = l_\sigma(\theta) - \tilde\lambda\,\frac{t-\mu}{\sigma^2}, \quad \tilde{l}_{\sigma\sigma}(\theta) = l_{\sigma\sigma}(\theta) + 2\tilde\lambda\,\frac{t-\mu}{\sigma^3}.$$
The remaining quantities are the same as in the scale-parameter case with $\psi_\theta(\theta) = \left(-\frac{1}{\sigma},\ -\frac{t-\mu}{\sigma^2}\right)$.

4. Simulation Study

A simulation study was conducted to investigate the performance of the intervals described above. We used sample sizes of $n = 10, 20, 30, 40, 50, 70$, and $100$. For the quantile case, we used $p = 0.25, 0.5, 0.75$. For the reliability function, we used $t = F^{-1}(q)$, $q = 0.25, 0.5, 0.75$, where $F^{-1}$ is the inverse cumulative distribution function of the standard logistic distribution. We used $\mu = 0$, $\sigma = 1$ for the sample generation. For each combination of the simulation indices, we generated 10,000 samples. For each sample, we computed the Wald interval and the LR interval, in addition to the intervals based on the Barndorff-Nielsen (BN) and Lugannani and Rice (LGR) modifications of the LR statistic that use the likelihood departure $Q$ derived by [8]. The confidence coefficient, or probability content, $1 - \alpha$ is taken as 0.90, 0.95, and 0.99. The results of our simulations are given in Table 1, Table 2, Table 3 and Table 4.
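A scaled-down sketch of such a study (our own code: 300 replications instead of 10,000, numerical derivatives instead of the analytic ones of Section 3, and the Wald interval for $\mu$ only) looks like this:

```python
import math, random

def log1pexp(t):
    """log(1 + e^t), safe against overflow."""
    return t + math.log1p(math.exp(-t)) if t > 0 else math.log1p(math.exp(t))

def loglik(x, mu, sigma):
    """Logistic log-likelihood, written in an overflow-safe form."""
    return sum(-math.log(sigma) - (xi - mu) / sigma
               - 2.0 * log1pexp(-(xi - mu) / sigma) for xi in x)

def fit(x, iters=40, h=1e-5):
    """Damped Newton-Raphson with finite-difference derivatives.
    Returns the MLE and the numerical Hessian entries at the last iterate."""
    n = len(x)
    mu = sum(x) / n
    sigma = math.sqrt(3.0 * sum((v - mu) ** 2 for v in x) / n) / math.pi
    f = lambda m, s: loglik(x, m, s)
    for _ in range(iters):
        g1 = (f(mu + h, sigma) - f(mu - h, sigma)) / (2 * h)
        g2 = (f(mu, sigma + h) - f(mu, sigma - h)) / (2 * h)
        a = (f(mu + h, sigma) - 2 * f(mu, sigma) + f(mu - h, sigma)) / h**2
        c = (f(mu, sigma + h) - 2 * f(mu, sigma) + f(mu, sigma - h)) / h**2
        b = (f(mu + h, sigma + h) - f(mu + h, sigma - h)
             - f(mu - h, sigma + h) + f(mu - h, sigma - h)) / (4 * h**2)
        det = a * c - b * b
        d_mu, d_sg = (c * g1 - b * g2) / det, (a * g2 - b * g1) / det
        step = 1.0
        while sigma - step * d_sg <= 0.5 * sigma:   # damp steps that crush sigma
            step *= 0.5
        mu, sigma = mu - step * d_mu, sigma - step * d_sg
    return mu, sigma, (a, b, c)

def wald_error_rate_mu(n=20, reps=300, seed=12345):
    """Observed non-coverage of the nominal 95% Wald interval for mu (true mu = 0)."""
    random.seed(seed)
    z975 = 1.959964
    miss = 0
    for _ in range(reps):
        # inverse-cdf sampling from the standard logistic
        x = [-math.log(1.0 / random.random() - 1.0) for _ in range(n)]
        mu_hat, sig_hat, (a, b, c) = fit(x)
        var_mu = -c / (a * c - b * b)   # [j^{-1}]_{mu,mu} with j = -Hessian
        miss += abs(mu_hat) > z975 * math.sqrt(var_mu)
    return miss / reps
```

With the full 10,000 replications and the analytic derivatives, this kind of loop should reproduce the Wald column of Table 1 up to Monte Carlo error.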

5. Results and Conclusions

The results for the confidence intervals for the location parameter $\mu$ are given in Table 1. It appears that the LR intervals need a sample size of at least 20 for the confidence coefficients of 90% and 95%, and at least 30 for intervals with a confidence coefficient of 99%, to achieve reasonable accuracy in terms of the observed coverage probability. The Wald intervals need even larger sample sizes for a reasonable performance: for small sample sizes both the Wald and the LR intervals tend to be anti-conservative, with the Wald interval markedly more so. On the other hand, the third-order confidence intervals BN and LGR have a satisfactory performance, even for sample sizes as small as 10. The performances of the BN interval and the LGR interval are quite similar.
A similar pattern is observed for intervals for the scale parameter σ . When the sample size is small, such as 10 or 20, the Wald intervals tend to be highly anti-conservative. The refined likelihood ratio intervals (BN and LGR) have a quite satisfactory performance, even for samples of size 10, and for all values of α under study. A similar performance is observed for quantiles. However, for the reliability function, all intervals have a satisfactory performance, while the BN and LGR intervals have the closest coverage probability to the nominal one.
It appears that the third-order procedures are very effective in improving the coverage probability of the LR intervals, and they give very accurate results, even for samples as small as 10 observations only. This is very desirable in situations where it is difficult or costly to obtain large samples.
The work in this paper achieved its goal of developing more accurate likelihood inference procedures that need smaller sample sizes for their validity, resulting in more precise inferences about the quantities of practical importance in applications, such as the distribution quantiles and the reliability function. Other functions of importance can be treated using the same general formulation followed in this paper. The consequence of applying third-order methods is a reduction in the cost and time needed to obtain the samples, while still reaching precise and accurate conclusions about the phenomenon under study.

Funding

Open access funding provided by the Qatar National Library.

Acknowledgments

The author would like to thank the referees for their thoughtful comments and suggestions that improved the paper.

Conflicts of Interest

The author declares no conflict of interest.

References

  1. Johnson, N.L.; Kotz, S.; Balakrishnan, N. Continuous Univariate Distributions, 2nd ed.; John Wiley and Sons: New York, NY, USA, 1995; Volume 2. [Google Scholar]
  2. Balakrishnan, N. Handbook of the Logistic Distribution; Chapman and Hall: London, UK, 1992. [Google Scholar]
  3. Antle, C.; Klimko, L.; Harkness, W. Confidence intervals for the parameters of the logistic distribution. Biometrika 1970, 57, 397–402. [Google Scholar] [CrossRef]
  4. Asgharzadeh, A.; Valiollahi, R.; Mousa, A. Point and interval estimation for the logistic distribution based on record data. SORT-Stat. Oper. Res. Trans. 2016, 40, 89–112. [Google Scholar]
  5. He, X.; Chen, W.; Qian, W. Maximum likelihood estimators of the parameters of the log-logistic distribution. Stat. Pap. 2020, 61, 1875–1892. [Google Scholar] [CrossRef]
  6. Meeker, W.; Escobar, L. Statistical Methods for Reliability Data; Wiley Inter-Science: Hoboken, NJ, USA, 1998. [Google Scholar]
  7. Chen, W.X.; Long, C.X.; Yang, R.; Yao, D.S. Maximum likelihood estimator of the location parameter under moving extremes ranked set sampling design. Acta Math. Appl. Sin. (Engl. Ser.) 2021, 37, 101–108. [Google Scholar] [CrossRef]
  8. Fraser, D.A.S.; Reid, N.; Wu, J. A simple general formula for tail probabilities for frequentist and Bayesian inference. Biometrika 1999, 86, 249–264. [Google Scholar] [CrossRef]
  9. Barndorff-Nielsen, O.E.; Cox, D.R. Inference and Asymptotics; Chapman and Hall: London, UK, 1994. [Google Scholar]
  10. Severini, T. Likelihood Methods in Statistics; Oxford University Press: London, UK, 2000. [Google Scholar]
  11. Fraser, D.A.S.; Reid, N. Ancillaries and third-order significance. Utilitas Math. 1995, 47, 33–53. [Google Scholar]
  12. Rekkas, M.; Wong, A. Third-order inference for the Weibull distribution. Comput. Stat. Data Anal. 2005, 49, 499–525. [Google Scholar] [CrossRef]
  13. Tarng, C. Third-order likelihood-based inference for the log-normal regression model. J. Appl. Stat. 2014, 41, 1976–1988. [Google Scholar] [CrossRef]
  14. Skovgaard, I.M. Successive improvements of the order of ancillarity. Biometrika 1986, 73, 516–519. [Google Scholar] [CrossRef]
  15. Lugannani, R.; Rice, S. Saddlepoint approximation for the distribution function of the sum of independent variables. Adv. Appl. Probab. 1980, 12, 475–490. [Google Scholar] [CrossRef]
  16. Barndorff-Nielsen, O.E. Modified signed log likelihood ratio. Biometrika 1991, 78, 557–563. [Google Scholar] [CrossRef]
Table 1. Coverage Probabilities for the Confidence Intervals for µ .
n        α = 0.01                        α = 0.05                        α = 0.10
         Wald    LR      BN      LGR     Wald    LR      BN      LGR     Wald    LR      BN      LGR
10       0.031   0.016   0.011   0.012   0.090   0.066   0.050   0.051   0.147   0.125   0.104   0.104
20       0.018   0.012   0.010   0.011   0.069   0.057   0.049   0.049   0.118   0.107   0.096   0.096
30       0.016   0.012   0.010   0.010   0.061   0.055   0.051   0.051   0.113   0.106   0.100   0.100
40       0.014   0.011   0.010   0.011   0.061   0.056   0.053   0.053   0.112   0.107   0.102   0.102
50       0.013   0.010   0.010   0.010   0.058   0.055   0.052   0.052   0.110   0.105   0.100   0.100
70       0.010   0.009   0.009   0.009   0.053   0.049   0.048   0.048   0.102   0.099   0.096   0.096
100      0.011   0.011   0.011   0.012   0.053   0.051   0.051   0.051   0.111   0.109   0.107   0.107
Table 2. Coverage Probabilities for the Confidence Intervals for σ .
n        α = 0.01                        α = 0.05                        α = 0.10
         Wald    LR      BN      LGR     Wald    LR      BN      LGR     Wald    LR      BN      LGR
10       0.079   0.017   0.011   0.011   0.138   0.071   0.054   0.054   0.183   0.129   0.105   0.105
20       0.042   0.014   0.013   0.013   0.086   0.058   0.051   0.051   0.134   0.108   0.101   0.101
30       0.030   0.012   0.013   0.013   0.078   0.056   0.054   0.053   0.129   0.108   0.101   0.102
40       0.026   0.012   0.011   0.011   0.065   0.052   0.049   0.050   0.113   0.102   0.097   0.097
50       0.026   0.012   0.011   0.011   0.072   0.057   0.053   0.053   0.115   0.107   0.104   0.104
70       0.020   0.010   0.010   0.010   0.064   0.054   0.055   0.056   0.119   0.107   0.107   0.108
100      0.017   0.011   0.011   0.011   0.060   0.053   0.052   0.053   0.112   0.106   0.103   0.103
Table 3. Coverage Probabilities for the Confidence Intervals for Quantiles x p .
p     n      α = 0.01                        α = 0.05                        α = 0.10
             Wald    LR      BN      LGR     Wald    LR      BN      LGR     Wald    LR      BN      LGR
0.25  10     0.042   0.015   0.011   0.011   0.095   0.065   0.049   0.049   0.144   0.119   0.098   0.098
0.25  20     0.027   0.014   0.010   0.010   0.072   0.058   0.052   0.052   0.123   0.109   0.100   0.100
0.25  30     0.020   0.013   0.011   0.012   0.066   0.056   0.051   0.051   0.117   0.106   0.099   0.099
0.25  40     0.018   0.011   0.011   0.012   0.060   0.054   0.051   0.051   0.111   0.104   0.101   0.101
0.25  50     0.015   0.010   0.010   0.010   0.059   0.054   0.051   0.052   0.108   0.103   0.100   0.100
0.25  70     0.014   0.010   0.010   0.011   0.057   0.052   0.050   0.050   0.109   0.105   0.101   0.102
0.25  100    0.013   0.010   0.011   0.011   0.051   0.048   0.048   0.048   0.099   0.098   0.098   0.099
0.5   10     0.029   0.014   0.009   0.009   0.082   0.062   0.048   0.048   0.139   0.117   0.095   0.095
0.5   20     0.019   0.011   0.009   0.009   0.066   0.056   0.049   0.049   0.122   0.111   0.099   0.100
0.5   30     0.017   0.012   0.011   0.011   0.064   0.058   0.053   0.053   0.116   0.108   0.102   0.102
0.5   40     0.014   0.012   0.011   0.011   0.059   0.052   0.050   0.050   0.116   0.109   0.104   0.104
0.5   50     0.013   0.011   0.012   0.012   0.057   0.053   0.052   0.052   0.103   0.099   0.097   0.097
0.5   70     0.013   0.011   0.010   0.011   0.055   0.052   0.050   0.051   0.108   0.105   0.104   0.104
0.5   100    0.010   0.009   0.010   0.011   0.050   0.049   0.049   0.050   0.099   0.097   0.096   0.097
0.75  10     0.041   0.015   0.011   0.012   0.090   0.062   0.048   0.048   0.139   0.116   0.097   0.097
0.75  20     0.023   0.011   0.010   0.010   0.075   0.057   0.051   0.051   0.129   0.111   0.102   0.103
0.75  30     0.021   0.012   0.011   0.011   0.070   0.058   0.054   0.055   0.124   0.113   0.104   0.104
0.75  40     0.019   0.013   0.012   0.012   0.064   0.057   0.054   0.054   0.112   0.104   0.101   0.101
0.75  50     0.018   0.012   0.011   0.012   0.058   0.052   0.051   0.052   0.110   0.103   0.100   0.101
0.75  70     0.016   0.012   0.013   0.013   0.058   0.053   0.052   0.052   0.106   0.100   0.099   0.099
0.75  100    0.012   0.010   0.012   0.012   0.053   0.051   0.051   0.051   0.101   0.101   0.101   0.101
Table 4. Coverage Probabilities for the Confidence Intervals for the Reliability Function R t .
t      n      α = 0.01                        α = 0.05                        α = 0.10
              Wald    LR      BN      LGR     Wald    LR      BN      LGR     Wald    LR      BN      LGR
−1.1   10     0.007   0.015   0.010   0.010   0.051   0.068   0.051   0.052   0.106   0.126   0.102   0.103
−1.1   20     0.009   0.012   0.011   0.011   0.048   0.055   0.048   0.049   0.100   0.107   0.098   0.098
−1.1   30     0.008   0.010   0.009   0.009   0.049   0.052   0.049   0.049   0.099   0.104   0.097   0.097
−1.1   40     0.010   0.012   0.011   0.011   0.051   0.055   0.051   0.051   0.104   0.107   0.104   0.105
−1.1   50     0.012   0.013   0.013   0.014   0.055   0.057   0.056   0.056   0.105   0.108   0.105   0.105
−1.1   70     0.010   0.011   0.010   0.011   0.048   0.051   0.049   0.049   0.097   0.099   0.097   0.098
−1.1   100    0.009   0.010   0.010   0.011   0.052   0.053   0.053   0.053   0.102   0.104   0.103   0.103
0      10     0.006   0.014   0.010   0.010   0.052   0.067   0.051   0.051   0.110   0.127   0.103   0.104
0      20     0.010   0.013   0.012   0.012   0.051   0.058   0.051   0.051   0.104   0.110   0.099   0.099
0      30     0.009   0.011   0.010   0.010   0.052   0.055   0.052   0.052   0.103   0.108   0.102   0.102
0      40     0.011   0.012   0.011   0.012   0.049   0.052   0.049   0.049   0.098   0.101   0.097   0.098
0      50     0.010   0.012   0.011   0.011   0.053   0.055   0.053   0.053   0.103   0.106   0.102   0.102
0      70     0.009   0.010   0.010   0.010   0.049   0.051   0.050   0.050   0.099   0.101   0.098   0.098
0      100    0.010   0.010   0.011   0.011   0.048   0.049   0.049   0.049   0.098   0.099   0.098   0.098
1.1    10     0.006   0.013   0.009   0.009   0.049   0.067   0.051   0.051   0.108   0.127   0.104   0.104
1.1    20     0.008   0.012   0.010   0.010   0.049   0.056   0.050   0.051   0.104   0.113   0.103   0.103
1.1    30     0.008   0.010   0.009   0.010   0.047   0.052   0.048   0.049   0.096   0.101   0.096   0.097
1.1    40     0.011   0.012   0.012   0.013   0.051   0.054   0.053   0.053   0.102   0.105   0.102   0.103
1.1    50     0.008   0.009   0.009   0.010   0.051   0.055   0.052   0.052   0.100   0.104   0.101   0.101
1.1    70     0.010   0.012   0.011   0.012   0.048   0.050   0.048   0.049   0.099   0.102   0.098   0.099
1.1    100    0.011   0.010   0.011   0.012   0.049   0.050   0.050   0.050   0.102   0.105   0.101   0.101

Cite as: Baklizi, A. Improved Likelihood Inference Procedures for the Logistic Distribution. Symmetry 2022, 14, 1767. https://doi.org/10.3390/sym14091767
