*1.2. A Solution: Sequential Statistic Framework*

The proposed distribution compares all the sample variances before a certain point (where the potential shift occurs) with all the sample variances after the time of the shift. In essence, the multi-sample hypothesis testing problem is approached by using *m* sequential two-sample tests, described as:

$$\begin{array}{lll} S\_0^2 & \text{is compared with} & S\_1^2, S\_2^2, \dots, S\_m^2\\ S\_0^2, S\_1^2 & \text{is compared with} & S\_2^2, S\_3^2, \dots, S\_m^2\\ & \text{and so forth until} &\\ S\_0^2, S\_1^2, \dots, S\_{m-1}^2 & \text{is compared with} & S\_m^2. \end{array} \tag{5}$$

Note that, due to the assumption that we have *m* + 1 samples, the procedure to identify a possible change in the variance requires *m* comparisons. Of these *m* comparisons, the one that leads to the largest disparity between the sample variances on the left and the right of (5) indicates the most likely position where the process experienced a change in variance. In essence, what is needed is to quantify the difference between the sample variances on the left and right of (5), to determine which of the *m* comparisons yields the largest difference, and finally to use some measure to decide whether this maximum difference falls within a set tolerance range. Our proposed mathematical construct for achieving this is discussed in the remainder of this introduction. Assuming that no shift in the process variance has occurred, it is possible to construct a series of two-sample statistics that correspond to the general procedure described in (5). Each statistic corresponds to whether at sample *r* = *κ* the two independent samples (the sample variances before time *r* and the sample variances after and including time *r*) are from normal distributions with the same unknown variance $\sigma^2$. This can alternatively be viewed as testing whether $\sigma^2 = \sigma\_1^2$, which is equivalent to testing *λ* = 1. As such, detecting a shift in the process variance reduces to the following hypothesis test:

$$H\_0: \sigma^2 = \sigma\_1^2 \text{ vs } H\_A: \sigma^2 \neq \sigma\_1^2 \text{ or alternatively } H\_0: \lambda = 1 \text{ vs } H\_A: \lambda \neq 1.$$

Suppose that a shift of size *λ* occurs in the process variance; then the corresponding random variables after the shift would be $W\_i \sim \text{Gamma}\left(\frac{n\_i-1}{2}, 2\lambda\right)$, $i = \kappa, \kappa+1, \dots, m$ distributed (see (4)). If there is no shift in the variance, *λ* = 1, and it follows that $W\_i \sim \text{Gamma}\left(\frac{n\_i-1}{2}, 2\right)$, $i = 0, 1, \dots, m$. Thus the hypothesis being investigated can be changed depending on the choice of the scale parameter of the specific gamma random variables. From (5) it follows that the series of statistics that forms the building blocks of the process is given by

$$\mathcal{U}\_r^\* = \frac{\left(\frac{\sum\_{i=r}^{m} (n\_i - 1)S\_i^2}{\lambda \sigma^2 \sum\_{i=r}^{m} (n\_i - 1)}\right)}{\left(\frac{\sum\_{i=0}^{r-1} (n\_i - 1)S\_i^2}{\sigma^2 \sum\_{i=0}^{r-1} (n\_i - 1)}\right)} \equiv \frac{\frac{\sum\_{i=r}^{m} W\_i}{\sum\_{i=r}^{m} (n\_i - 1)}}{\frac{\sum\_{i=0}^{r-1} W\_i}{\sum\_{i=0}^{r-1} (n\_i - 1)}}, \quad r = 1, 2, \dots, m - 1, m, \tag{6}$$

where $W\_i \sim \text{Gamma}\left(\frac{n\_i-1}{2}, 2\right)$ for $i = 0, 1, \dots, r-1$ and $W\_i \sim \text{Gamma}\left(\frac{n\_i-1}{2}, 2\lambda\right)$ for $i = r, r+1, \dots, m$.

In essence, (6) implies that the sample variances before the potential shift are pooled together, and the sample variances after the potential shift are pooled together. The numerator of the statistic at time *r* is the average, weighted by each sample's degrees of freedom, of all the sample variances between and including samples *r* and *m*, while the denominator is the corresponding weighted average of all the sample variances between and including samples 0 and *r* − 1; this is graphically presented in Figure 2. Thus $U\_r^\*$ is the test statistic that is typically used to test the equality of two population variances from a normal distribution.

**Figure 2.** Building blocks of new distribution.

From (3), (4) and (6), it follows that if no shift has occurred in the process variance, i.e., *λ* = 1, each statistic $U\_r^\*$ follows a univariate *F* distribution with $\sum\_{i=r}^{m}(n\_i-1)$ and $\sum\_{i=0}^{r-1}(n\_i-1)$ degrees of freedom, respectively.
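As a concrete illustration, the charting statistics in (6) can be computed directly from raw samples. The sketch below pools the degrees-of-freedom-weighted sample variances on either side of each candidate shift point *r*; the helper name `u_star` and the simulated data are our own illustrative choices, not part of the source.

```python
import numpy as np

def u_star(samples, r):
    """U*_r of (6): ratio of the pooled (degrees-of-freedom weighted)
    sample variances after and before the candidate shift point r.
    `samples` is a list of m + 1 one-dimensional arrays (samples 0..m)."""
    dof = np.array([len(s) - 1 for s in samples], dtype=float)       # n_i - 1
    ss = np.array([(len(s) - 1) * np.var(s, ddof=1) for s in samples])
    after = ss[r:].sum() / dof[r:].sum()     # weighted average of S_i^2, i >= r
    before = ss[:r].sum() / dof[:r].sum()    # weighted average of S_i^2, i < r
    return after / before

rng = np.random.default_rng(1)
samples = [rng.normal(0.0, 1.0, size=15) for _ in range(10)]   # m = 9, in control
stats = [u_star(samples, r) for r in range(1, 10)]
```

Under the null hypothesis each such statistic follows the F distribution noted above; for two samples of size 15, for example, $U\_1^\*$ is F(14, 14) distributed.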

The reasoning behind the critical values that would indicate whether *λ* = 1 is justified by inspecting the sequence of statistics in (6). Suppose that an increase (*λ* > 1) in the process variance has occurred from sample *r* = *κ* onward, then:

• The statistic $U\_\kappa^\*$ will tend to assume a large value, since every sample variance pooled into its numerator reflects the inflated variance while none of those pooled into its denominator do. The value that this statistic assumes also has a high likelihood of being the maximum value of all the $U\_r^\*$, $r = 1, 2, \dots, m-1, m$ statistics.

• As such, the most reasonable method of calculating the critical value to detect an upward shift in the process variance is to calculate the maximum order statistic of the charting statistics $U\_r^\*$, $r = 1, 2, \dots, m-1, m$ (under the null hypothesis) and to set the critical value equal to some percentile of the distribution of the maximum order statistic.

Using a similar but inverted argument, it can be justified that the critical value of the control chart should be set equal to some percentile of the minimum order statistic of the charting statistics, under the null hypothesis of no shift having occurred, if the detection of a downward shift in the process variance is of concern. Due to space limitations, the minimum order statistics are not presented in this article.
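The critical-value recipe above can be sketched via Monte Carlo, here assuming equal sample sizes *n* so that the weights in (6) cancel; the function `max_ustar_crit` and its defaults are our own illustrative choices, not the paper's exact simulation code.

```python
import numpy as np

def max_ustar_crit(m, n, reps=100_000, q=0.95, seed=0):
    """Monte Carlo estimate of the 100*q-th percentile of max_r U*_r under
    H0 (lambda = 1), using the gamma representation W_i ~ Gamma((n-1)/2, 2).
    Equal sample sizes n are assumed, so each U*_r reduces to a ratio of
    averaged W_i's."""
    rng = np.random.default_rng(seed)
    w = rng.gamma(shape=(n - 1) / 2, scale=2.0, size=(reps, m + 1))
    csum = np.cumsum(w, axis=1)
    r = np.arange(1, m + 1)
    before = csum[:, :-1] / r                              # mean of W_0 .. W_{r-1}
    after = (csum[:, -1:] - csum[:, :-1]) / (m + 1 - r)    # mean of W_r .. W_m
    maxima = (after / before).max(axis=1)
    return float(np.quantile(maxima, q))
```

For *m* = 1 this reduces to a single F-distributed statistic, which gives a quick correctness check against known F percentiles.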

To simplify matters going forward, and for notational purposes, we omit the factors $\sum\_{i=r}^{m}(n\_i-1)$ and $\sum\_{i=0}^{r-1}(n\_i-1)$ in (6) and drop the superscript \*, and therefore the statistics of interest become

$$U\_r = \frac{\sum\_{i=r}^{m} W\_i}{\sum\_{i=0}^{r-1} W\_i}, \quad r = 1, 2, \dots, m - 1, m, \tag{7}$$

where $W\_i \sim \text{Gamma}(\alpha\_i > 0, \beta\_i > 0)$, $i = 0, 1, \dots, m$, are independent. Since $\alpha\_i = \frac{n\_i-1}{2}$ and $\beta\_i = 2\lambda$, the shape parameter is related to the sample size of the *i*th sample, and the scale parameter is related to the underlying distribution's variance. The theoretical focus here is on the statistics in (7), whereas the statistics in (6) are those that are practically applicable and form the basis of the simulation study in Section 3.

Constructing statistics using ratios of random variables as in (7) is of practical interest in many areas of science. Ref. [2] studied and derived the joint density functions of ratios of Rayleigh, Rician, Nakagami-*m*, and Weibull random variables; [3] approached the ratios of generalised gamma variables via exact and near-exact solutions; and [4] derived closed-form expressions for the ratio of independent non-identically distributed variables from an *α*-*μ* distribution, which have applications in the performance analysis of wireless communication systems.

The proposed model in this paper will be compared to the model of [5] in Section 3, and is described here for the convenience of the reader. If *r* = 2, the bivariate joint probability density function of the statistics in (10) is given by

$$\begin{array}{rl} f(t\_1, t\_2) = & \frac{\left(\beta\_0^{\alpha\_1+\alpha\_2}\beta\_1^{\alpha\_0+\alpha\_2}\beta\_2^{\alpha\_0+\alpha\_1}\right)\Gamma(\alpha\_0+\alpha\_1+\alpha\_2)}{\Gamma(\alpha\_0)\Gamma(\alpha\_1)\Gamma(\alpha\_2)} t\_1^{\alpha\_1-1} t\_2^{\alpha\_2-1} (1+t\_1)^{\alpha\_2} \\ & \times \left(\beta\_1\beta\_2 + \beta\_0\beta\_2 t\_1 + \beta\_0\beta\_1(1+t\_1)t\_2\right)^{-\alpha\_0-\alpha\_1-\alpha\_2} \end{array} \tag{8}$$

where $t\_1, t\_2 > 0$, $\alpha\_i, \beta\_i > 0$ for $i = 0, 1, 2$, $t\_1 = \frac{w\_1}{w\_0}$ and $t\_2 = \frac{w\_2}{w\_0+w\_1}$, and $\Gamma(\cdot)$ is the gamma function [6].

Refs. [5,7] proposed a beta distribution that is used to detect a shift in the process variance which is based on the Q chart that was developed by [8], and as such the series of comparisons of sample statistics they made were

$$\begin{array}{lll} S\_1^2 & \text{is compared with} & S\_0^2\\ S\_2^2 & \text{is compared with} & S\_0^2, S\_1^2\\ & \text{and so forth until} &\\ S\_m^2 & \text{is compared with} & S\_0^2, S\_1^2, \dots, S\_{m-1}^2. \end{array} \tag{9}$$

Using a similar approach to the method described earlier the statistics can be given as

$$T\_r = \frac{W\_r}{\sum\_{i=0}^{r-1} W\_i}, \quad r = 1, 2, \dots, m - 1, m, \tag{10}$$
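A minimal sketch of the statistics in (10), using the gamma representation of the $W\_i$; the helper name `t_stats` and the parameter choices are ours.

```python
import numpy as np

def t_stats(w):
    """T_r of (10): each new W_r is compared against the pooled history
    W_0 + ... + W_{r-1}."""
    w = np.asarray(w, dtype=float)
    csum = np.cumsum(w)
    return w[1:] / csum[:-1]          # r = 1, ..., m

rng = np.random.default_rng(0)
# W_i ~ Gamma((n-1)/2, 2) with n = 15, i.e. m = 29 (30 samples in total)
w = rng.gamma(shape=7.0, scale=2.0, size=30)
t = t_stats(w)
```

Note the structural contrast with (6): only the denominator is pooled, so the numerator always carries the variability of a single sample.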

which is graphically presented in Figure 3—with *Wi* defined in (3) and (4).

**Figure 3.** Building blocks of [7].

Based on the work of [5], Ref. [9] provided insight into detecting a change in the parameter structure when the underlying process is multivariate normal.

#### *1.3. Outline of Paper*

In Section 2, the bivariate joint probability density function which emanates from (7) is derived, accompanied by an exploratory shape analysis. This is followed by the derivation of the marginal probability density functions, the product moment, as well as the maximum order statistic of the distribution. In Section 3, the performance of the model proposed in this article is compared to the Q chart studied by [5] through a simulation study. Tables of simulated values for the 95th percentiles of the maximum order statistics of the sets of random variables in (6) and (7) are provided in Section 3, for varying parameter choices, to enable practical application of the proposed model. The values in these tables are corroborated through numerical integration of the derived expressions for the maximum order statistics. Appendices A and B contain proofs of the obtained results as well as the positioning of this distribution among other often-considered bivariate beta models.

#### **2. Proposed Model**

In this section, the joint probability density function of the random variables $U\_1$ and $U\_2$ (see (7) when *r* = 2) is derived, followed by a shape analysis and the derivation of the marginal probability density functions, the product moment, as well as the maximum order statistic. A brief review of this new candidate relative to related bivariate beta models is provided in Appendices A and B, which provide additional insight for modelling as well as closed-form expressions.

#### *2.1. Bivariate Probability Density Function*

**Theorem 1.** *Let $W\_i$ be independent gamma random variables with parameters $\alpha\_i = \frac{n\_i-1}{2} > 0$, $\beta\_i = 2\lambda > 0$ for $i = 0, 1, 2$. Let $U\_1 = \frac{W\_1+W\_2}{W\_0}$ and $U\_2 = \frac{W\_2}{W\_0+W\_1}$, then the joint probability density function of $U\_1$ and $U\_2$ is given by*

$$\begin{array}{rl} f(u\_1, u\_2) = & \frac{\left(\beta\_0^{\alpha\_1+\alpha\_2}\beta\_1^{\alpha\_0+\alpha\_2}\beta\_2^{\alpha\_0+\alpha\_1}\right)\Gamma(\alpha\_0+\alpha\_1+\alpha\_2)}{\Gamma(\alpha\_0)\Gamma(\alpha\_1)\Gamma(\alpha\_2)} (u\_1-u\_2)^{\alpha\_1-1} u\_2^{\alpha\_2-1} (1+u\_1)^{\alpha\_2} (1+u\_2)^{\alpha\_0} \\ & \times \left(\beta\_1\beta\_2(1+u\_2) + \beta\_0\beta\_2(u\_1-u\_2) + \beta\_0\beta\_1 u\_2(1+u\_1)\right)^{-\alpha\_0-\alpha\_1-\alpha\_2} \end{array} \tag{11}$$

*where $u\_1 > u\_2 > 0$.*
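As a numerical sanity check on Theorem 1, the density (11), with the normalising constant $\Gamma(\alpha\_0+\alpha\_1+\alpha\_2)$ that arises from the gamma integral in the proof (see Appendix B), should integrate to one over the support $u\_1 > u\_2 > 0$. The sketch below (the helper name `joint_pdf` is ours) verifies this for the baseline parameters of Section 2.2.

```python
import numpy as np
from scipy.integrate import dblquad
from scipy.special import gammaln

def joint_pdf(u1, u2, a, b):
    """Joint density (11) on u1 > u2 > 0; a = (alpha0, alpha1, alpha2),
    b = (beta0, beta1, beta2).  The constant is built in log space for
    numerical stability."""
    a0, a1, a2 = a
    b0, b1, b2 = b
    logc = ((a1 + a2) * np.log(b0) + (a0 + a2) * np.log(b1) + (a0 + a1) * np.log(b2)
            + gammaln(a0 + a1 + a2) - gammaln(a0) - gammaln(a1) - gammaln(a2))
    core = b1 * b2 * (1 + u2) + b0 * b2 * (u1 - u2) + b0 * b1 * u2 * (1 + u1)
    return (np.exp(logc) * (u1 - u2) ** (a1 - 1) * u2 ** (a2 - 1)
            * (1 + u1) ** a2 * (1 + u2) ** a0 * core ** (-(a0 + a1 + a2)))

# total probability over the support u1 > u2 > 0 should be one
mass, err = dblquad(lambda u1, u2: joint_pdf(u1, u2, (5, 5, 5), (2, 2, 2)),
                    0, np.inf, lambda u2: u2, lambda u2: np.inf)
```

The inner integral runs over $u\_1 \in (u\_2, \infty)$ and the outer over $u\_2 \in (0, \infty)$, matching the support of (11).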

## *2.2. Shape Analysis*

In this section a shape analysis is conducted for the joint probability density function (11). A standard set of parameters has been chosen as a baseline: $\alpha\_0 = \alpha\_1 = \alpha\_2 = 5$ and $\beta\_0 = \beta\_1 = \beta\_2 = 2$; in other words, a process where all three samples consist of $5 \times 2 + 1 = 11$ observations, and where no shift has occurred in the process variance. Some of the parameters are then varied from this baseline in order to investigate the effect that a change in the specific parameters has on the general shape of the joint probability density function. Note that the change in some parameters will be large, so large that they lose practical realism; this is done to emphasise and investigate the general change in the shape, and is not meant to be an indication of the practical applications of the joint probability density function. The functions are only plotted for $u\_1 \in [0, 3]$, $u\_2 \in [0, 3]$.

Figure 4 shows the effect that increasing the sample sizes (while keeping them equal) has on the joint probability density function. Increasing the sample sizes increases the height of the peak of the probability density function. Larger sample sizes also shrink the length and width of the "tails" of the joint probability density function. In essence, the larger the sample sizes ($n\_i = 11$, $n\_i = 51$, $n\_i = 101$), the smaller the domain on which the function has non-trivial values.

**Figure 4.** Equal sample sizes, no shift in the variance.

Figure 5 below demonstrates that a sustained increase in the process variance, irrespective of size, minimally affects the general shape and location of the joint probability density function, but does affect the height of the probability density function's peak. In the below example, the shift in the process variance occurs at time 1, and as one would hope and expect, the joint probability density function relies heavily on the value of the statistic at time 1, *U*1. A similar effect, where the joint probability density function relies heavily on the value of *U*<sup>2</sup> is seen when the shift occurs at time 2.

**Figure 5.** Equal sample sizes, increase in variance at time 1.

#### *2.3. Marginal Probability Density Functions*

**Theorem 2.** *Assume that* (*U*1, *U*2) *has the joint probability density function in (11), then the marginal probability density function of U*<sup>1</sup> *is given by*

$$\begin{array}{rl} f(u\_1) = & \frac{\left(\beta\_0^{\alpha\_1+\alpha\_2}\beta\_1^{\alpha\_0+\alpha\_2}\beta\_2^{\alpha\_0+\alpha\_1}\right)\Gamma(\alpha\_0+\alpha\_1+\alpha\_2)}{\Gamma(\alpha\_0)\Gamma(\alpha\_1)\Gamma(\alpha\_2)} (\beta\_1\beta\_2 + \beta\_0\beta\_2 u\_1)^{-\alpha\_0-\alpha\_1-\alpha\_2} \\ & \times (1+u\_1)^{\alpha\_2} \sum\_{k=0}^{\infty} \left[ \frac{\alpha\_0!}{k!(\alpha\_0-k)!} (u\_1)^{k+\alpha\_1+\alpha\_2-1} \frac{\Gamma(\alpha\_1)\Gamma(k+\alpha\_2)}{\Gamma(k+\alpha\_1+\alpha\_2)} \right. \\ & \left. \times \,\_2F\_1\left(\alpha\_0+\alpha\_1+\alpha\_2, k+\alpha\_2; k+\alpha\_1+\alpha\_2; -\frac{u\_1(\beta\_1\beta\_2-\beta\_0\beta\_2+\beta\_0\beta\_1+\beta\_0\beta\_1 u\_1)}{\beta\_1\beta\_2+\beta\_0\beta\_2 u\_1}\right) \right] \end{array} \tag{12}$$

*where $u\_1 > 0$, $\alpha\_0, \alpha\_1, \alpha\_2 \in \mathbb{Z}$, $\beta\_1(\beta\_2 + \beta\_0(1+u\_1)) > \beta\_0\beta\_2$, $\left|-\frac{u\_1(\beta\_1\beta\_2-\beta\_0\beta\_2+\beta\_0\beta\_1+\beta\_0\beta\_1 u\_1)}{\beta\_1\beta\_2+\beta\_0\beta\_2 u\_1}\right| < 1$, and the marginal probability density function of $U\_2$ is given by*

$$\begin{array}{rl} f(u\_2) = & \frac{\left(\beta\_0^{\alpha\_1+\alpha\_2}\beta\_1^{\alpha\_0+\alpha\_2}\beta\_2^{\alpha\_0+\alpha\_1}\right)\Gamma(\alpha\_0+\alpha\_1+\alpha\_2)}{\Gamma(\alpha\_0)\Gamma(\alpha\_2)} (1+u\_2)^{\alpha\_0} \sum\_{k=0}^{\infty} \left[ \frac{\alpha\_2!}{k!(\alpha\_2-k)!} \right. \\ & \times \sum\_{l=0}^{\infty} \left[ (-1)^l \binom{\alpha\_0+\alpha\_1+\alpha\_2+l-1}{l} (\beta\_0\beta\_2 + \beta\_0\beta\_1 u\_2)^{-\alpha\_0-\alpha\_1-\alpha\_2-l} \right. \\ & \left. \left. \times \left(\beta\_1\beta\_2 + \beta\_1\beta\_2 u\_2 - \beta\_0\beta\_2 u\_2 + \beta\_0\beta\_1 u\_2\right)^{l} u\_2^{k-\alpha\_0-l-1} \frac{\Gamma(\alpha\_0+\alpha\_2+l-k)}{\Gamma(\alpha\_0+\alpha\_1+\alpha\_2+l-k)} \right] \right] \end{array} \tag{13}$$

*where $u\_2 > 0$, $\alpha\_0, \alpha\_1, \alpha\_2 \in \mathbb{Z}$, $\alpha\_i, \beta\_i > 0$ for $i = 0, 1, 2$, and $\,\_2F\_1(\cdot,\cdot;\cdot;\cdot)$ is the Gauss hypergeometric function ([6] p. 1005). Note that in (12), if $\alpha\_0 \in \{1, 2, \dots\}$ the sum changes from $\sum\_{k=0}^{\infty}$ to $\sum\_{k=0}^{\alpha\_0}$ (see [6] p. 25, Equations (1.110) and (1.111)). Similarly, if $\alpha\_2 \in \{1, 2, \dots\}$ the sum changes from $\sum\_{k=0}^{\infty}$ to $\sum\_{k=0}^{\alpha\_2}$ in (13).*

If $\beta\_0 = \beta\_1 = \beta\_2$, it follows that:

$$\begin{array}{rl} f(u\_1) = & \frac{\Gamma(\alpha\_0+\alpha\_1+\alpha\_2)}{\Gamma(\alpha\_0)\Gamma(\alpha\_1)\Gamma(\alpha\_2)} (1+u\_1)^{-\alpha\_0-\alpha\_1} \sum\_{k=0}^{\infty} \left[ \frac{\alpha\_0!}{k!(\alpha\_0-k)!} (u\_1)^{k+\alpha\_1+\alpha\_2-1} \right. \\ & \left. \times \frac{\Gamma(\alpha\_1)\Gamma(k+\alpha\_2)}{\Gamma(k+\alpha\_1+\alpha\_2)} \,\_2F\_1(\alpha\_0+\alpha\_1+\alpha\_2, k+\alpha\_2; k+\alpha\_1+\alpha\_2; -u\_1) \right] \end{array}$$

where $0 < u\_1 < 1$, and

$$\begin{array}{rl} f(u\_2) = & \frac{\Gamma(\alpha\_0+\alpha\_1+\alpha\_2)}{\Gamma(\alpha\_0)\Gamma(\alpha\_2)} (1+u\_2)^{\alpha\_0} \sum\_{k=0}^{\infty} \left[ \frac{\alpha\_2!}{k!(\alpha\_2-k)!} \sum\_{j=0}^{\infty} \left[ (-1)^j \binom{\alpha\_0+\alpha\_1+\alpha\_2+j-1}{j} \right. \right. \\ & \left. \left. \times (1+u\_2)^{-\alpha\_0-\alpha\_1-\alpha\_2} u\_2^{k-\alpha\_0-j-1} \frac{\Gamma(\alpha\_0+\alpha\_2+j-k)}{\Gamma(\alpha\_0+\alpha\_1+\alpha\_2+j-k)} \right] \right] \end{array}$$

where $u\_2 > 0$.

*2.4. Product Moment and Order Statistics*

**Theorem 3.** *Assume that* (*U*1, *U*2) *has the joint probability density function (11), then the product moment of U*<sup>1</sup> *and U*<sup>2</sup> *is given by*

$$\begin{array}{rl} E\left(U\_1^r U\_2^s\right) = & \sum\_{p=0}^{r} \binom{r}{p} \frac{\left(\beta\_0^{\alpha\_1-p-s}\beta\_1^{-\alpha\_1}\beta\_2^{p+s}\right)\Gamma(\alpha\_2+p+s)\Gamma(\alpha\_0+\alpha\_1-p-s)\Gamma(\alpha\_1+r-p)\Gamma(\alpha\_0-r)}{\Gamma(\alpha\_0)\Gamma(\alpha\_1)\Gamma(\alpha\_2)\Gamma(\alpha\_0+\alpha\_1-p)} \\ & \times \,\_2F\_1\left(\alpha\_0+\alpha\_1-p-s, \alpha\_1+r-p; \alpha\_0+\alpha\_1-p; 1-\frac{\beta\_0\beta\_2}{\beta\_1\beta\_2}\right) \end{array} \tag{14}$$

*where $r, s \in \{0, 1, 2, \dots\}$, $\alpha\_0+\alpha\_1 > r+s$, $\alpha\_0 > r$, $\alpha\_0+\alpha\_1 > p$, and $\left|1-\frac{\beta\_0\beta\_2}{\beta\_1\beta\_2}\right| < 1$.*

The maximum order statistics are of importance in detecting whether a shift in the process variance has indeed occurred, as was discussed in Section 1 and will be demonstrated in Section 3. Although a closed-form expression for the maximum order statistic is not tractable, an expression is provided that can be implemented to calculate numerical values.

Assume that $(U\_1, U\_2)$ has the joint probability density function (11); then the distribution of the maximum order statistic of $U\_1$ and $U\_2$ can be determined by

$$P(\max(U\_1, U\_2) < z) = P(U\_1 < z) = \int\_0^z f(u\_1)\, du\_1 \tag{15}$$

when $\alpha\_0, \alpha\_1, \alpha\_2 \in \mathbb{Z}$ (since $u\_1 > u\_2$).
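Equation (15) rests on the fact that $U\_1 > U\_2$ on the whole support, so the maximum is always $U\_1$. A short Monte Carlo sketch (the parameter choices are ours) makes this concrete.

```python
import numpy as np

rng = np.random.default_rng(0)
alphas = (5.0, 5.0, 5.0)               # alpha_i = (n_i - 1)/2, i.e. n_i = 11
w = rng.gamma(shape=alphas, scale=2.0, size=(100_000, 3))
u1 = (w[:, 1] + w[:, 2]) / w[:, 0]     # U_1 = (W_1 + W_2) / W_0
u2 = w[:, 2] / (w[:, 0] + w[:, 1])     # U_2 = W_2 / (W_0 + W_1)
# U1 > U2 everywhere, so P(max(U1, U2) < z) = P(U1 < z) as in (15);
# the probability can therefore be estimated from the U1 draws alone
p = float(np.mean(u1 < 2.0))
```

The inequality follows algebraically: $(W\_1+W\_2)(W\_0+W\_1) > W\_0 W\_2$ for positive $W\_i$, so $U\_1 > U\_2$ with probability one.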

#### **3. Comparison Study and Discussion**

Deriving the order statistics of the series of statistics in (6) and (7) is a complex task since they are neither independent nor identically distributed, (see [10]). Hence values for the 95th percentiles of the maximum order statistics are simulated for varying numbers of samples (*m* equal to 1, 4, 9, 14, 19, 24, 29, 49, 99 and 499) and sample sizes (*n* equal to 2, 5, 10, 15, 20, 25, 30, 50, 100 and 500) in Section 3.1.

In Section 3.1, properties of (7) are also studied when no shift in the variance occurs. This is imperative since the control limits of a control chart are constructed under the null hypothesis that no shift has occurred, or alternatively, given that the process is in control. The in-control property investigated for both the Q chart and the newly proposed model (6) is where the maximum order statistic is most likely to occur when no shift has occurred in the process variance. Practically, this is a very important question since, if the maximum order statistic consistently occurs at roughly the same place in the sequence of samples, it implies that the distribution should be treated with added suspicion if it indicates that a shift in the process occurs at this location.

In Section 3.2, the out of control performance of the newly proposed distribution is compared with the Q chart model investigated by [5]. The probability that the respective control charts will detect a shift in the process variance is investigated for differing sizes of shifts in the process variance. The comparison is made for varying numbers of samples as well as sample sizes.

#### *3.1. Comparison When the Process Is in Control*

The location of the maximum order statistic when the process is in control is investigated using graphs. For brevity's sake only one possible combination (*m* = 29, *n* = 15) of the number of samples and sample sizes mentioned above is included in this paper. All of them, however, lead to the same general conclusion.

As can be seen from Figure 6, the maximum order statistic of the [5] distribution occurs most often at the first statistic, with the probability of the maximum occurring at subsequent statistics steadily decreasing. This implies that the Q chart becomes more stable as the process progresses, which makes practical sense since each subsequent statistic includes more of the sample data. The newly proposed distribution's maximum order statistic occurs most often at the first statistic, and second-most often at the last statistic. This is due to the way in which the statistics of the distribution are constructed (see (6)). This implies that while our proposed model may detect shifts at the ends of the series of samples, signals received at these locations should be treated with a degree of skepticism.

**Figure 6.** Location of maximum order statistics: *m* = 29 and *n* = 15.
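The location pattern described above can be reproduced with a short simulation; `argmax_location_freq` is our own illustrative helper, assuming equal sample sizes and the gamma representation of (6).

```python
import numpy as np

def argmax_location_freq(m, n, reps=20_000, seed=0):
    """Frequency with which each U*_r, r = 1..m, is the largest charting
    statistic when the process is in control (equal sample sizes n)."""
    rng = np.random.default_rng(seed)
    w = rng.gamma(shape=(n - 1) / 2, scale=2.0, size=(reps, m + 1))
    csum = np.cumsum(w, axis=1)
    r = np.arange(1, m + 1)
    stats = ((csum[:, -1:] - csum[:, :-1]) / (m + 1 - r)) / (csum[:, :-1] / r)
    loc = np.argmax(stats, axis=1)               # 0-based position of the max
    return np.bincount(loc, minlength=m) / reps

freq = argmax_location_freq(29, 15)   # the (m = 29, n = 15) case of Figure 6
```

The first and last positions dominate because those statistics place a single sample's variability against a pooled average, while the middle statistics average on both sides.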

In Table 1 the 95th percentiles of the maximum order statistics of the newly proposed distributions are simulated (to the third decimal) using Monte Carlo simulation; in other words, the value *z* in the equation $P\left(\max\left(U\_1^\*, U\_2^\*, \dots, U\_m^\*\right) < z\right) = 0.95$. Similarly, the 95th percentiles of the maximum order statistics of (7) are simulated in Table 2.


**Table 1.** 95th percentiles of the maximum order statistics of *U*∗ <sup>1</sup> , *U*<sup>∗</sup> <sup>2</sup> , ··· , *U*<sup>∗</sup> *m*.


**Table 2.** 95th percentiles of the maximum order statistics of *U*1, *U*2, ··· , *Um*.

Note that:


**Table 3.** Simulated and theoretical 95th percentiles of the maximum order statistics of *U*1, *U*2.


#### *3.2. Comparisons When the Process Is Out of Control*

In this section, the proposed model's ability to detect shifts is compared with that of the Q chart investigated by [5]. The probability of signaling that a shift in the process variance has occurred depends on a few variables. In this paper, these variables are: the number of samples, the sample size of the samples, where in the process the shift in the process variance occurs, and the size of the shift. The figures in this section take all of these parameters into account.

In each of the following figures, the probability of signaling a shift in the variance is displayed as a function of the size of the shift, where the shift size *λ* ranges from *λ* = 1 (no shift) to *λ* = 5 (the process variance inflated five-fold). Many different combinations of the number of samples, the sample sizes, as well as the locations of the shift were tested; however, only the graphs that illustrate key findings are included in this paper. The chosen parameters that were simulated are: number of samples (*m*) equal to 10, 20 and 30, sample sizes (*n*) equal to 2, 5, 10 and 20, and the location of the shift in the process variance (*κ*) occurring at (roughly, due to integer sample numbers) 25%, 50% and 75% of the way through the samples.
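The detection probabilities plotted in Figures 7–9 can be approximated along the following lines; the helper `detect_prob` and its parameters are our own choices, assuming equal sample sizes and the gamma representation of (6).

```python
import numpy as np

def detect_prob(m, n, kappa, lam, crit, reps=20_000, seed=0):
    """Probability that max_r U*_r exceeds `crit` when the process variance
    is multiplied by `lam` from sample kappa onward (equal sample sizes n)."""
    rng = np.random.default_rng(seed)
    # W_i ~ Gamma((n-1)/2, 2) before the shift and Gamma((n-1)/2, 2*lam) after
    scale = np.where(np.arange(m + 1) >= kappa, 2.0 * lam, 2.0)
    w = rng.gamma(shape=(n - 1) / 2, scale=scale, size=(reps, m + 1))
    csum = np.cumsum(w, axis=1)
    r = np.arange(1, m + 1)
    stats = ((csum[:, -1:] - csum[:, :-1]) / (m + 1 - r)) / (csum[:, :-1] / r)
    return float(np.mean(stats.max(axis=1) > crit))
```

With `crit` set to the in-control 95th percentile of the maximum order statistic, `detect_prob` at *λ* = 1 returns roughly 0.05 by construction, and it rises with the size of the shift.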

By simulating graphs such as those in Figures 7–9, certain conclusions can be reached about the properties and efficacy of the two competing models ((5) and (9)):

	- 1. In the above graphs, each plotted point was simulated 500,000 times during the Monte Carlo process. For all the *n* = 2 graphs, the points vary erratically between each 0.05 increase in the shift size, and thus even for a large number of simulations the process cannot be described as "stable".
	- 2. The Q chart seems to be completely incapable of detecting an increase in the process variance when *n* = 2, with the probability of detecting a shift remaining at roughly 5%, irrespective of the size of the shift.
	- 3. While the new model's probability of detecting a shift does increase as the size of the shift increases, it remains relatively low, at roughly 7% to 10%, just marginally higher than the 5% chance when the process is actually in control. This implies that while it might be theoretically possible to implement the new model for sample sizes of 2, it would likely not be a practically useful technique.
	- 4. The new model's probability of detecting a shift does not increase as the number of samples increases, as would be expected (and as is the case for the other choices of *n*).
	- 1. For small sample sizes (*n* = 5), the proposed model outperforms the Q chart for small shifts in the process variance, whereas the Q chart performs better for larger shifts.
	- 2. The Q chart performs at its best when the shift in the process variance occurs late in the series of samples.
	- 3. For larger sample sizes (*n* = 20) the proposed model outperforms the Q chart when the shift in the process variance occurs early, but when the shift occurs roughly half way through the series of samples, or further, the performance of the two methods are very similar.

**Figure 7.** Probability of detecting a shift when *n* = 2 (left panel: *m* = 9, *κ* = 7; right panel: *m* = 29, *κ* = 15).

**Figure 8.** Probability of detecting a shift when *m* = 9 (left panel: *n* = 5, *κ* = 3; right panel: *n* = 20, *κ* = 7).

**Figure 9.** Probability of detecting a shift when *κ* ≈ 0.75*m* (left panel: *m* = 19, *n* = 10, *κ* = 15; right panel: *m* = 29, *n* = 20, *κ* = 22).

#### **4. Concluding Remarks**

In this paper a new bivariate beta type distribution is proposed that can be utilised to detect a shift in a process's variance when the underlying process follows a normal distribution. The proposed model compares favourably with the Q chart model studied by [5] in most situations; especially when the number of samples is small, and when the process variance experiences a change early on in the series of samples. Future work can include focus on (i) when a shift occurs within a sample, (ii) expanding underlying distributional assumptions to that of the class of scale mixtures to consider departures from normality (see [11]), and (iii) the multivariate setup, of which we propose the probability density function in the following theorem and a proof outlined in the Appendices A and B.

**Theorem 4.** *Let $W\_i$ be independent gamma random variables with parameters $(\alpha\_i > 0, \beta\_i > 0)$ for $i = 0, 1, 2, \dots, m$. Let $U\_r = \frac{\sum\_{i=r}^{m} W\_i}{\sum\_{i=0}^{r-1} W\_i}$, $r = 1, 2, \dots, m-1, m$, then the joint probability density function of $U\_1, U\_2, \dots, U\_m$ is given by*

$$\begin{array}{rl} f(u\_1, u\_2, \dots, u\_m) = & \frac{\prod\_{i=1}^{m-1}\left[(u\_i - u\_{i+1})^{\alpha\_i - 1}\right](u\_m)^{\alpha\_m - 1}\,\Gamma\left(\sum\_{i=0}^{m}\alpha\_i\right)}{\prod\_{i=0}^{m}\left[\beta\_i^{\alpha\_i}\Gamma(\alpha\_i)\right]} \\ & \times (1+u\_1)^{\sum\_{i=2}^{m}\alpha\_i} \prod\_{i=2}^{m}\left[(1+u\_i)^{-\alpha\_{i-1}-\alpha\_i}\right] \\ & \times \left(\frac{1}{\beta\_0} + \frac{u\_1-u\_2}{\beta\_1(1+u\_2)} + \sum\_{i=2}^{m-1}\left[\frac{(1+u\_1)(u\_i-u\_{i+1})}{\beta\_i(1+u\_i)(1+u\_{i+1})}\right] + \frac{(1+u\_1)u\_m}{\beta\_m(1+u\_m)}\right)^{-\sum\_{i=0}^{m}\alpha\_i} \end{array} \tag{16}$$

*where u*<sup>1</sup> > *u*<sup>2</sup> > ··· > *um* > 0.

**Author Contributions:** Conceptualization, S.W.H. and A.B.; methodology, S.W.H., A.B. and P.A.M.; software, P.A.M.; validation, S.W.H., A.B., J.T.F. and P.A.M.; formal analysis, P.A.M.; investigation, A.B., J.T.F. and P.A.M.; writing—original draft preparation, P.A.M.; writing—review and editing, A.B., J.T.F. and P.A.M.; supervision, S.W.H. and A.B.; funding acquisition, A.B. and J.T.F. All authors have read and agreed to the published version of the manuscript.

**Funding:** This work was based upon research supported in part by the National Research Foundation (NRF) of South Africa, grant RA201125576565, nr 145681; NRF ref. SRUG190308422768 nr. 120839; the RDP296/2022 grant from the University of Pretoria, South Africa, as well as the Centre of Excellence in Mathematical and Statistical Sciences, based at the University of the Witwatersrand, South Africa. The opinions expressed and conclusions arrived at are those of the authors and are not necessarily to be attributed to the NRF.

**Acknowledgments:** The authors acknowledge the StatDisT group based at the University of Pretoria, Pretoria, South Africa. Furthermore, the authors thank the editor as well as three anonymous reviewers for their insightful contributions which led to an improved manuscript.

**Conflicts of Interest:** The authors declare no conflict of interest.

#### **Appendix A. Positioning**

The aim of this section is to show the relationships between some of the most commonly used bivariate beta distributions, and to relate these distributions to the bivariate distribution derived by [5] and the one proposed by this article. In this section the statistics in the relationships are constructed out of chi-square random variables, not gamma, as is conducted throughout the rest of this article. This is conducted for the sake of simplicity; however, the relationships hold regardless.

Let $Y\_1 \sim \chi^2(\alpha)$, $Y\_2 \sim \chi^2(\beta)$ and $Y\_3 \sim \chi^2(\gamma)$. Then:

• If $Q\_1 = \frac{Y\_1}{Y\_1+Y\_2+Y\_3}$ and $Q\_2 = \frac{Y\_2}{Y\_1+Y\_2+Y\_3}$, then the joint distribution of $Q\_1$ and $Q\_2$ is called a bivariate beta type I distribution. In Figure A1 this is denoted as $(Q\_1, Q\_2) \sim BI(\alpha, \beta, \gamma)$. The multivariate generalisation of this distribution is called the Dirichlet type I distribution (see [12,13]).


Relationships between these models are graphically represented in Figure A1.

**Figure A1.** Relationships between several bivariate beta distributions.

#### **Appendix B. Proofs**

**Proof of Theorem 1.** The joint probability density function of *W*0, *W*1, *W*<sup>2</sup> is given by

$$f(w\_0, w\_1, w\_2) = \frac{1}{\beta\_0^{\alpha\_0}\beta\_1^{\alpha\_1}\beta\_2^{\alpha\_2}\Gamma(\alpha\_0)\Gamma(\alpha\_1)\Gamma(\alpha\_2)} \left(w\_0^{\alpha\_0-1}e^{-\frac{w\_0}{\beta\_0}}\right)\left(w\_1^{\alpha\_1-1}e^{-\frac{w\_1}{\beta\_1}}\right)\left(w\_2^{\alpha\_2-1}e^{-\frac{w\_2}{\beta\_2}}\right) \tag{A1}$$

where $w\_0, w\_1, w\_2 > 0$. Let $U\_0 = W\_0$; then $W\_1 = \frac{U\_0(U\_1-U\_2)}{(1+U\_2)}$ and $W\_2 = \frac{U\_0 U\_2(1+U\_1)}{(1+U\_2)}$, and

$$\begin{array}{l} f(u\_0, u\_1, u\_2) \\ = \frac{1}{\beta\_0^{a\_0}\beta\_1^{a\_1}\beta\_2^{a\_2}\Gamma(a\_0)\Gamma(a\_1)\Gamma(a\_2)}(u\_1-u\_2)^{a\_1-1}u\_2^{a\_2-1}(1+u\_1)^{a\_2}(1+u\_2)^{-a\_1-a\_2} \\ \quad\times\, u\_0^{(a\_0+a\_1+a\_2)-1}\exp\left(\frac{-u\_0(\beta\_1\beta\_2(1+u\_2)+\beta\_0\beta\_2(u\_1-u\_2)+\beta\_0\beta\_1u\_2(1+u\_1))}{\beta\_0\beta\_1\beta\_2(1+u\_2)}\right). \end{array} \tag{A2}$$
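This change of variables can be sanity-checked numerically. For $m = 2$ the forward map consistent with these inverse formulas is $U\_1 = (W\_1+W\_2)/W\_0$ and $U\_2 = W\_2/(W\_0+W\_1)$ (the transformation used in the proof of Theorem 4); a minimal sketch with arbitrary positive inputs:

```python
import numpy as np

rng = np.random.default_rng(1)
w0, w1, w2 = rng.gamma(shape=2.0, scale=1.5, size=3)  # arbitrary positive values

# Forward transformation (the m = 2 case of the map used in Theorem 4).
u0 = w0
u1 = (w1 + w2) / w0
u2 = w2 / (w0 + w1)

# Inverse transformation as stated in the proof of Theorem 1.
w1_back = u0 * (u1 - u2) / (1 + u2)
w2_back = u0 * u2 * (1 + u1) / (1 + u2)

print(np.isclose(w1, w1_back), np.isclose(w2, w2_back))
```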

By integrating (A2) with respect to $u\_0$ and rearranging the terms, it follows that

$$\begin{array}{l} f(u\_1, u\_2) \\ = \frac{1}{\beta\_0^{a\_0}\beta\_1^{a\_1}\beta\_2^{a\_2}\Gamma(a\_0)\Gamma(a\_1)\Gamma(a\_2)}(u\_1-u\_2)^{a\_1-1}u\_2^{a\_2-1}(1+u\_1)^{a\_2}(1+u\_2)^{-a\_1-a\_2} \\ \quad\times \int\_0^{\infty} u\_0^{(a\_0+a\_1+a\_2)-1}\exp\left(\frac{-u\_0(\beta\_1\beta\_2(1+u\_2)+\beta\_0\beta\_2(u\_1-u\_2)+\beta\_0\beta\_1u\_2(1+u\_1))}{\beta\_0\beta\_1\beta\_2(1+u\_2)}\right)du\_0. \end{array} \tag{A3}$$

By applying [6] p. 346, Equation (3.381.4), to (A3), the result in (11) follows.
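The Gradshteyn and Ryzhik result used here is the standard gamma integral $\int\_0^{\infty} x^{\nu-1}e^{-\mu x}\,dx = \Gamma(\nu)\mu^{-\nu}$ for $\nu, \mu > 0$. A minimal numeric check with arbitrary assumed parameter values (trapezoidal quadrature on a truncated range, since the integrand decays exponentially):

```python
import numpy as np
from math import gamma

nu, mu = 3.7, 2.4  # arbitrary positive parameters, assumed for illustration

# Trapezoidal rule on [~0, 60]; the tail beyond 60 is exponentially negligible.
x = np.linspace(1e-8, 60.0, 2_000_001)
integrand = x ** (nu - 1) * np.exp(-mu * x)
dx = x[1] - x[0]
value = (integrand.sum() - 0.5 * (integrand[0] + integrand[-1])) * dx

# Closed form: Gamma(nu) / mu**nu.
print(value, gamma(nu) / mu ** nu)
```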

**Proof of Theorem 2.** From (11), by rearranging the terms and applying [6] p. 25, Equations (1.110) and (1.111), it follows that

$$\begin{array}{l} f(u\_1) \\ = \frac{\left(\beta\_0^{a\_1+a\_2}\beta\_1^{a\_0+a\_2}\beta\_2^{a\_0+a\_1}\right)\Gamma(a\_0+a\_1+a\_2)}{\Gamma(a\_0)\Gamma(a\_1)\Gamma(a\_2)}(1+u\_1)^{a\_2} \\ \quad\times \left(\beta\_1\beta\_2-\beta\_0\beta\_2+\beta\_0\beta\_1+\beta\_0\beta\_1u\_1\right)^{-a\_0-a\_1-a\_2}\sum\_{k=0}^{\infty}\left(\frac{a\_0!}{k!(a\_0-k)!}\right) \\ \quad\times \int\_0^{u\_1}(u\_1-u\_2)^{a\_1-1}u\_2^{k+a\_2-1}\left(\frac{\beta\_1\beta\_2+\beta\_0\beta\_2u\_1}{\beta\_1\beta\_2-\beta\_0\beta\_2+\beta\_0\beta\_1+\beta\_0\beta\_1u\_1}+u\_2\right)^{-a\_0-a\_1-a\_2}du\_2. \end{array} \tag{A4}$$

By applying [6] p. 317, Equation (3.197.8), to (A4), the result in (12) follows. From (11), by rearranging the terms and applying [6] p. 25, Equations (1.110) and (1.111) twice, it follows that

$$\begin{array}{l} f(u\_2) \\ = A(u\_1,u\_2)\int\_{u\_2}^{\infty}(u\_1-u\_2)^{a\_1-1}(1+u\_1)^{a\_2}\left(1+\frac{u\_1(\beta\_0\beta\_2+\beta\_0\beta\_1u\_2)}{\beta\_1\beta\_2+\beta\_1\beta\_2u\_2-\beta\_0\beta\_2u\_2+\beta\_0\beta\_1u\_2}\right)^{-a\_0-a\_1-a\_2}du\_1 \\ = A(u\_1,u\_2)\sum\_{k=0}^{\infty}\bigg[\frac{a\_2!}{k!(a\_2-k)!}\sum\_{l=0}^{\infty}(-1)^l\binom{a\_0+a\_1+a\_2+l-1}{l} \\ \quad\times\left(\frac{\beta\_0\beta\_2+\beta\_0\beta\_1u\_2}{\beta\_1\beta\_2+\beta\_1\beta\_2u\_2-\beta\_0\beta\_2u\_2+\beta\_0\beta\_1u\_2}\right)^{-a\_0-a\_1-a\_2-l}\int\_{u\_2}^{\infty}(u\_1-u\_2)^{a\_1-1}u\_1^{k-a\_0-a\_1-a\_2-l}du\_1\bigg] \end{array} \tag{A5}$$

where

$$\begin{array}{rcl} A(u\_1,u\_2) &=& \frac{\left(\beta\_0^{a\_1+a\_2}\beta\_1^{a\_0+a\_2}\beta\_2^{a\_0+a\_1}\right)\Gamma(a\_0+a\_1+a\_2)}{\Gamma(a\_0)\Gamma(a\_1)\Gamma(a\_2)}u\_2^{a\_2-1}(1+u\_2)^{a\_0} \\ &\times& \left(\beta\_1\beta\_2+\beta\_1\beta\_2u\_2-\beta\_0\beta\_2u\_2+\beta\_0\beta\_1u\_2\right)^{-a\_0-a\_1-a\_2}. \end{array} \tag{A6}$$

By applying [6] p. 315, Equation (3.191.2), to (A5), the result in (13) follows.
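The binomial-series expansions invoked above are of the form $(1+x)^{-n} = \sum\_{l=0}^{\infty}(-1)^l\binom{n+l-1}{l}x^l$ for $|x|<1$, with the generalised binomial coefficient $\binom{n+l-1}{l} = \Gamma(n+l)/(\Gamma(n)\,l!)$. A minimal numeric check with arbitrary assumed values of $n$ and $x$:

```python
import math

n, x = 2.5, 0.3  # assumed exponent and a point with |x| < 1

# Generalised binomial series for (1+x)^(-n), truncated after 80 terms
# (the remainder is negligible for |x| = 0.3).
series = sum(
    (-1) ** l * math.gamma(n + l) / (math.gamma(n) * math.factorial(l)) * x ** l
    for l in range(80)
)
print(series, (1 + x) ** (-n))
```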

**Proof of Theorem 3.** By using the relationships in Figure A1 and reordering the terms, it follows that

$$\begin{array}{rcl} E\left(U\_1^r U\_2^s\right) &=& E\left((T\_1+T\_2+T\_1T\_2)^r(T\_2)^s\right) \\ &=& \int\_0^{\infty}\int\_0^{\infty}(t\_1+t\_2+t\_1t\_2)^r(t\_2)^s g(t\_1,t\_2)\,dt\_2\,dt\_1 \\ &=& \frac{\beta\_0^{-a\_0}\beta\_1^{-a\_1}\beta\_2^{a\_0+a\_1}\,\Gamma(a\_0+a\_1+a\_2)}{\Gamma(a\_0)\Gamma(a\_1)\Gamma(a\_2)}\int\_0^{\infty}(1+t\_1)^{r-a\_0-a\_1}(t\_1)^{a\_1-1} \\ && \times \int\_0^{\infty}\left(\frac{t\_1}{1+t\_1}+t\_2\right)^r(t\_2)^{a\_2+s-1}\left(\frac{\beta\_1\beta\_2+\beta\_0\beta\_2t\_1}{\beta\_0\beta\_1(1+t\_1)}+t\_2\right)^{-a\_0-a\_1-a\_2}dt\_2\,dt\_1. \end{array} \tag{A7}$$

By applying [6] p. 25, Equations (1.110) and (1.111), to (A7), and reordering the terms, it follows that

$$\begin{array}{l} E\left(U\_1^r U\_2^s\right) \\ = \sum\_{p=0}^{r}\binom{r}{p}\frac{\beta\_0^{-a\_0}\beta\_1^{-a\_1}\beta\_2^{a\_0+a\_1}\,\Gamma(a\_0+a\_1+a\_2)}{\Gamma(a\_0)\Gamma(a\_1)\Gamma(a\_2)}\int\_0^{\infty}(1+t\_1)^{-a\_0-a\_1+p}(t\_1)^{a\_1+r-p-1} \\ \quad\times\left(\frac{\beta\_1\beta\_2+\beta\_0\beta\_2t\_1}{\beta\_0\beta\_1(1+t\_1)}\right)^{-a\_0-a\_1-a\_2}\int\_0^{\infty}(t\_2)^{a\_2+p+s-1}\left(1+\frac{\beta\_0\beta\_1(1+t\_1)}{\beta\_1\beta\_2+\beta\_0\beta\_2t\_1}t\_2\right)^{-a\_0-a\_1-a\_2}dt\_2\,dt\_1. \end{array} \tag{A8}$$

From [6] p. 315, Equation (3.194.3), it follows that (A8) may be expressed as

$$\begin{array}{l} E\left(U\_1^r U\_2^s\right) \\ = \sum\_{p=0}^{r}\binom{r}{p}\frac{\beta\_0^{-a\_0}\beta\_1^{-a\_1}\beta\_2^{a\_0+a\_1}\,\Gamma(a\_0+a\_1+a\_2)}{\Gamma(a\_0)\Gamma(a\_1)\Gamma(a\_2)}\int\_0^{\infty}(1+t\_1)^{-a\_0-a\_1+p}(t\_1)^{a\_1+r-p-1} \\ \quad\times\left(\frac{\beta\_1\beta\_2+\beta\_0\beta\_2t\_1}{\beta\_0\beta\_1(1+t\_1)}\right)^{-a\_0-a\_1-a\_2}\left(\frac{\beta\_0\beta\_1(1+t\_1)}{\beta\_1\beta\_2+\beta\_0\beta\_2t\_1}\right)^{-a\_2-p-s} \\ \quad\times B(a\_2+p+s,\,a\_0+a\_1-p-s)\,dt\_1 \\ = \sum\_{p=0}^{r}\binom{r}{p}\beta\_0^{a\_1-p-s}\beta\_1^{a\_0-p-s}\beta\_2^{a\_0+a\_1}\frac{\Gamma(a\_2+p+s)\Gamma(a\_0+a\_1-p-s)}{\Gamma(a\_0)\Gamma(a\_1)\Gamma(a\_2)} \\ \quad\times(\beta\_1\beta\_2)^{-a\_0-a\_1+p+s}\int\_0^{\infty}(1+t\_1)^{-s}(t\_1)^{a\_1+r-p-1}\left(1+\frac{\beta\_0\beta\_2}{\beta\_1\beta\_2}t\_1\right)^{-a\_0-a\_1+p+s}dt\_1. \end{array} \tag{A9}$$

By applying [6] p. 317, Equation (3.197.5), to (A9), the result in (14) follows.
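One simple special case of the moment expression can be verified by Monte Carlo without any hypergeometric machinery. For $r = 1$, $s = 0$, independence of the $W\_i$ gives $E(U\_1) = E(W\_1+W\_2)\,E(1/W\_0) = (a\_1\beta\_1 + a\_2\beta\_2)/(\beta\_0(a\_0-1))$ for $a\_0 > 1$. A sketch with assumed shape and scale parameters:

```python
import numpy as np

rng = np.random.default_rng(2)
a0, a1, a2 = 5.0, 3.0, 4.0   # assumed shape parameters (a0 > 1 so E[1/W0] exists)
b0, b1, b2 = 1.0, 2.0, 0.5   # assumed scale parameters
n = 500_000

# Independent gamma random variables W0, W1, W2.
w0 = rng.gamma(a0, b0, n)
w1 = rng.gamma(a1, b1, n)
w2 = rng.gamma(a2, b2, n)

u1 = (w1 + w2) / w0  # the r = 1, s = 0 moment of U1

# Analytic value from independence: E[U1] = (a1*b1 + a2*b2) / (b0*(a0 - 1)).
analytic = (a1 * b1 + a2 * b2) / (b0 * (a0 - 1))
print(u1.mean(), analytic)
```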

**Proof of Theorem 4.** The joint probability density function of *Wi*, *i* = 0, 1, 2, ··· , *m* is given by

$$f(w\_0, w\_1, \dots, w\_m) = \prod\_{i=0}^{m}\frac{w\_i^{a\_i-1}e^{-\frac{w\_i}{\beta\_i}}}{\beta\_i^{a\_i}\Gamma(a\_i)},\quad w\_0, w\_1, \dots, w\_m > 0. \tag{A10}$$

Let $U\_0 = W\_0$ and $U\_r = \frac{\sum\_{i=r}^{m}W\_i}{\sum\_{i=0}^{r-1}W\_i}$, $r = 1, 2, \dots, m$. It follows that

$$\begin{array}{rcl} W\_0 &=& U\_0 \\ W\_1 &=& \frac{U\_0(U\_1-U\_2)}{(1+U\_2)} \\ W\_r &=& \frac{U\_0(1+U\_1)(U\_r-U\_{r+1})}{(1+U\_r)(1+U\_{r+1})},\quad r = 2, 3, \dots, m-1, \\ W\_m &=& \frac{U\_0(1+U\_1)U\_m}{(1+U\_m)} \end{array}$$

with $J(w\_0, \dots, w\_m \to u\_0, u\_1, \dots, u\_m) = \frac{u\_0^m(1+u\_1)^{m-1}}{\prod\_{j=2}^{m}(1+u\_j)^2}$. Subsequently, the joint probability density function of $U\_0, U\_1, U\_2, \dots, U\_m$ is

$$\begin{array}{l} f(u\_0, u\_1, u\_2, \dots, u\_m) \\ = \frac{\prod\_{i=1}^{m-1}(u\_i-u\_{i+1})^{a\_i-1}\,(u\_m)^{a\_m-1}}{\prod\_{i=0}^{m}\beta\_i^{a\_i}\Gamma(a\_i)}u\_0^{\sum\_{i=0}^{m}a\_i-1}(1+u\_1)^{\sum\_{i=2}^{m}a\_i}\prod\_{i=2}^{m}(1+u\_i)^{-a\_{i-1}-a\_i} \\ \quad\times\exp\left(-u\_0\left(\frac{1}{\beta\_0}+\frac{(u\_1-u\_2)}{\beta\_1(1+u\_2)}+\sum\_{i=2}^{m-1}\frac{(1+u\_1)(u\_i-u\_{i+1})}{\beta\_i(1+u\_i)(1+u\_{i+1})}+\frac{(1+u\_1)u\_m}{\beta\_m(1+u\_m)}\right)\right). \end{array} \tag{A11}$$

By applying [6] p. 346, Equation (3.381.4), to (A11), the result in (16) is proved.
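The general-$m$ change of variables in this proof can also be checked numerically: applying the forward map $U\_0 = W\_0$, $U\_r = \sum\_{i=r}^{m}W\_i / \sum\_{i=0}^{r-1}W\_i$ and then the stated inverse formulas should recover the original values exactly. A minimal sketch with an assumed $m$ and arbitrary positive inputs:

```python
import numpy as np

rng = np.random.default_rng(3)
m = 5  # assumed number of samples after the first, chosen for illustration
w = rng.gamma(shape=2.0, scale=1.0, size=m + 1)  # arbitrary positive w_0, ..., w_m

# Forward map: U_0 = W_0 and U_r = (sum of W_i for i >= r) / (sum of W_i for i < r).
u = np.empty(m + 1)
u[0] = w[0]
for r in range(1, m + 1):
    u[r] = w[r:].sum() / w[:r].sum()

# Inverse map as stated in the proof of Theorem 4.
w_back = np.empty(m + 1)
w_back[0] = u[0]
w_back[1] = u[0] * (u[1] - u[2]) / (1 + u[2])
for r in range(2, m):
    w_back[r] = u[0] * (1 + u[1]) * (u[r] - u[r + 1]) / ((1 + u[r]) * (1 + u[r + 1]))
w_back[m] = u[0] * (1 + u[1]) * u[m] / (1 + u[m])

print(np.allclose(w, w_back))
```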

#### **References**

