Article

Compositional Data Modeling through Dirichlet Innovations

by Seitebaleng Makgai 1,†, Andriette Bekker 1,† and Mohammad Arashi 1,2,*
1 Department of Statistics, University of Pretoria, Pretoria 0028, South Africa
2 Department of Statistics, Faculty of Mathematical Sciences, Ferdowsi University of Mashhad, Mashhad 9177948974, Iran
* Author to whom correspondence should be addressed.
† These authors contributed equally to this work.
Mathematics 2021, 9(19), 2477; https://doi.org/10.3390/math9192477
Submission received: 29 July 2021 / Revised: 24 September 2021 / Accepted: 27 September 2021 / Published: 3 October 2021
(This article belongs to the Special Issue Mathematical and Computational Statistics and Their Applications)

Abstract: The Dirichlet distribution is a well-known candidate for modeling compositional data sets. However, in the presence of outliers, the Dirichlet distribution fails to model such data sets, making other model extensions necessary. In this paper, the Kummer–Dirichlet distribution and the gamma distribution are coupled using the beta-generating technique. This development results in the proposal of the Kummer–Dirichlet gamma distribution, which presents greater flexibility in modeling compositional data sets. Some general properties, such as the probability density functions and the moments, are presented for this new candidate. The method of maximum likelihood is applied in the estimation of the parameters. The usefulness of this model is demonstrated through applications to synthetic and real data sets in which outliers are present.

1. Introduction

Compositional data sets have played a valuable role in the medical, genetic, and biological sciences due to the relative information conveyed through proportions, probabilities and percentages, as stated by [1]. Reference [1] describes the sample space of a compositional data set as a simplex, where the sum of all data points equals one or some whole number.
The most popular distribution for modeling compositional data sets is the Dirichlet distribution (see, for example, [2]). The literature contains various generalizations of the Dirichlet distribution that have been well studied in applications to compositional data sets (see, for example, [3,4,5,6,7,8]). Other generalizations studied in the literature belong to the Liouville family of distributions, as described in [9,10,11]. In Bayesian statistics, the Dirichlet distribution is the conjugate prior of the multinomial distribution, and it is best used in estimating categorical distributions.
An extension of the Dirichlet distribution, known as the Dirichlet-generated class of distributions, has recently been introduced and developed by [12]. This extension serves as a flexible alternative to the well-known Dirichlet and generalized Dirichlet distributions; its aim is to address the limitations that the Dirichlet distribution may pose when modeling certain compositional data sets. Consider a compositional data set in which diagnostic probabilities for a sample of 15 students are assigned by clinicians. The background of this data set is further explained in Section 5. Figure 1 gives a scatterplot of the probabilities and illustrates the fit of the Dirichlet distribution (bivariate case) to this data set; it highlights an opportunity where the fit of the Dirichlet distribution could be improved upon.
In [12], the beta-generating construction technique (pioneered and developed by [13]) is implemented to improve the fit of the Dirichlet distribution. The technique was an evolution from the univariate framework described below into a multivariate setting:
H(x) = \int_0^{G(x)} f(y)\, dy, \quad (1)
with the probability density function (pdf)
h(x) = f(G(x))\, g(x), \quad (2)
where G(·) is a continuous cumulative distribution function (cdf) and f(·) is the pdf of a random variable with support [0, 1]. By introducing extra parameters in f(·) and G(·), the resulting distribution provides greater flexibility in adapting modality and skewness.
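The univariate construction (1)–(2) is easy to verify numerically. The following Python/SciPy sketch (the gamma baseline and all parameter values here are arbitrary illustrative choices, not values from the paper) confirms that h integrates to one for any continuous baseline:

```python
import numpy as np
from scipy import stats
from scipy.integrate import quad

def beta_generated_pdf(x, a, b, baseline):
    """Beta-generated pdf h(x) = f(G(x)) g(x): f is the Beta(a, b) generator
    pdf, and G, g are the cdf/pdf of the chosen baseline distribution."""
    return stats.beta.pdf(baseline.cdf(x), a, b) * baseline.pdf(x)

# Illustrative baseline and generator parameters (arbitrary for this sketch).
base = stats.gamma(a=2.0, scale=1.5)
total, _ = quad(lambda t: beta_generated_pdf(t, 2.0, 3.0, base), 0.0, np.inf)
print(total)  # substituting u = G(x) reduces the integral to the Beta pdf's, i.e. 1
```

The substitution u = G(x) shows why the total mass is one regardless of the baseline chosen.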
Motivated by (1), from a multivariate viewpoint, in the methodology of [12], a new distribution H(x_1, …, x_p) for a random vector X = (X_1, X_2, …, X_p), x_i > 0, i = 1, 2, …, p, is constructed by nesting the cdfs of baseline distributions G_i(x_i) within the pdf of the generator distribution:
H(x_1, \ldots, x_p) = \frac{1}{B(\alpha)} \int_0^{G_1(x_1)} \cdots \int_0^{G_p(x_p)} \prod_{i=1}^{p} y_i^{\alpha_i - 1} \left(1 - \sum_{i=1}^{p} y_i\right)^{\alpha_{p+1} - 1} d\mathbf{y}, \quad (3)
with the pdf
h(x_1, \ldots, x_p) = \frac{1}{B(\alpha)} \prod_{i=1}^{p} g_i(x_i)\, G_i(x_i)^{\alpha_i - 1} \left(1 - \sum_{i=1}^{p} G_i(x_i)\right)^{\alpha_{p+1} - 1}, \quad (4)
for 0 < y_i < 1, ∑_{i=1}^p y_i < 1, 0 < G_i(x_i) < 1, and where B(α) is the multivariate beta function. Here ∑_{i=1}^p G_i(x_i) < 1, and g_i(x_i) and G_i(x_i) are the pdfs and cdfs of the baseline distributions, respectively. The authors of [12] developed the Dirichlet-gamma distribution, where the gamma distribution is taken as the baseline distribution G_i(x_i), i = 1, 2, …, p, and the Dirichlet distribution is taken as the generator distribution.
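A direct transcription of pdf (4) for p = 2 can be sketched as follows (Python/SciPy; the gamma baselines and all parameter values are illustrative assumptions only):

```python
import numpy as np
from scipy import stats

def dirichlet_generated_pdf(x, alpha, baselines):
    """Pdf (4): the Dirichlet(alpha) generator pdf evaluated at the baseline
    cdfs G_i(x_i), multiplied by the baseline pdfs g_i(x_i)."""
    G = np.array([b.cdf(xi) for b, xi in zip(baselines, x)])
    g = np.array([b.pdf(xi) for b, xi in zip(baselines, x)])
    if G.sum() >= 1.0:               # outside the support sum_i G_i(x_i) < 1
        return 0.0
    y = np.append(G, 1.0 - G.sum())  # complete the point on the simplex
    return stats.dirichlet.pdf(y, alpha) * g.prod()

# Illustrative gamma baselines (not the parameter values of the paper).
baselines = [stats.gamma(a=2.0, scale=0.5), stats.gamma(a=1.5, scale=0.8)]
v_in = dirichlet_generated_pdf([0.3, 0.4], [2.0, 3.0, 2.0], baselines)
v_out = dirichlet_generated_pdf([50.0, 50.0], [2.0, 3.0, 2.0], baselines)
print(v_in, v_out)  # positive density inside the support, 0.0 outside
```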
In the univariate case, the Kummer-beta distribution is seen as an extension of the beta distribution (see the studies of [14,15,16]); it then follows that the multivariate Kummer-beta distribution (referred to hereafter as the Kummer–Dirichlet distribution) is also considered an extension of the Dirichlet distribution (see [17]). Authors such as [14,15,16,18] have applied the generating technique to the Kummer-beta distribution by coupling the cdfs of different baseline distributions with the pdf of the Kummer-beta distribution. The development of generated distributions using the Kummer-beta distribution has introduced distributions that add more flexibility in modeling data sets on the (0, 1) domain (see [19] for an example).
In this paper, we propose a general multivariate construction methodology using the Kummer–Dirichlet (KD) pdf as the generator. This KD-generated class serves as a good alternative to the Dirichlet distribution for the statistical representation of specific proportional data. This class can be viewed as an evolution from the univariate framework into a multivariate setting as described in (3) but with the aim of offering more flexibility in modeling compositional data sets.
Thus, we introduce the KD distribution as the generating distribution, and a new class is proposed, with the following cdf
H(x_1, \ldots, x_p) = C \int_0^{G_1(x_1)} \cdots \int_0^{G_p(x_p)} \prod_{i=1}^{p} y_i^{\alpha_i - 1} \left(1 - \sum_{i=1}^{p} y_i\right)^{\alpha_{p+1} - 1} \exp\left(\lambda \sum_{i=1}^{p} y_i\right) d\mathbf{y}, \quad (5)
with α_i > 0 for i = 1, 2, …, p+1, −∞ < λ < ∞, C the normalizing constant, 0 < y_i < 1, ∑_{i=1}^p y_i < 1, y = (y_1, y_2, …, y_p), and G_i(x_i), i = 1, 2, …, p, the cdfs of baseline distributions with ∑_{i=1}^p G_i(x_i) < 1. Distributions with cdf (5) and normalizing constant (9) shall be referred to as Kummer–Dirichlet generated distributions.
This construction (5) highlights the importance of developing distributions that can improve the modeling of extreme observations in compositional data sets, where the Dirichlet might not be suitable or may fall short, as illustrated in Figure 1. For such cases and others that may arise, we propose a model with cdf (5). Thus, this novel study contributes to multivariate distribution theory in the following aspects:
1. The well-known beta-generator in the univariate case is extended to the Kummer–Dirichlet generator in the multivariate case.
2. A technique is proposed to construct multivariate distributions that combines a baseline distribution with a multivariate generator, opening up a plethora of possible results.
3. We propose a multivariate distribution that can be used for modeling compositional data with outliers.
4. Mathematical techniques are developed to derive the moment generating functions of multivariate distributions.
The organization of our contribution is as follows. In Section 2, the building blocks for the KD generator distribution, such as the normalizing constant of the pdf that corresponds to (5), are derived. In Section 3, the KD-gamma distribution is introduced, where we provide some technical results to derive the moments. In Section 4, the usefulness of the KD-gamma distribution, as compared to the Dirichlet-gamma distribution, is shown through a synthetic data analysis. Two real data sets, where outliers are present, are analyzed in Section 5. Finally, some conclusions are given in Section 6. Proofs of the main results are given in Appendix A.

2. Building Blocks of the Kummer–Dirichlet Distribution

The building blocks and notation necessary for the construction of distributions with cdf (5) are presented in this section. Since the Dirichlet distribution is an important building block, recall that a random vector Y = (Y_1, …, Y_p) ∈ ℝ^p is said to be Dirichlet (or standard Dirichlet) distributed with parameters α = (α_1, …, α_p; α_{p+1}), α_i > 0, i = 1, …, p+1, p ≥ 2, if its pdf is given by
f(\mathbf{y}) = C_1(\alpha)\, y_1^{\alpha_1 - 1} \cdots y_p^{\alpha_p - 1} \left(1 - \sum_{i=1}^{p} y_i\right)^{\alpha_{p+1} - 1}. \quad (6)
From (6), one can denote Y_{p+1} = 1 − ∑_{i=1}^p Y_i and let Y* = (Y_1, …, Y_p; Y_{p+1}) = (Y; Y_{p+1}). The random vectors Y and Y* can be defined on Ω_p and S_{p+1}, respectively, where
\Omega_p = \left\{ (y_1, \ldots, y_p) \in \mathbb{R}^p : \sum_{i=1}^{p} y_i < 1,\; y_i > 0,\; i = 1, \ldots, p \right\}
and
S_{p+1} = \left\{ (y_1, \ldots, y_{p+1}) \in \mathbb{R}^{p+1} : \sum_{i=1}^{p+1} y_i = 1,\; y_i > 0,\; i = 1, \ldots, p+1 \right\},
for p ≥ 2. The constant C_1(α) in (6) is given as
C_1^{-1}(\alpha) = \int_{\Omega_p} \prod_{i=1}^{p+1} y_i^{\alpha_i - 1}\, d\mathbf{y} = \frac{\prod_{i=1}^{p+1} \Gamma(\alpha_i)}{\Gamma(\alpha_{+})} = B(\alpha), \quad (7)
where Γ(·) is the gamma function and α_+ = ∑_{i=1}^{p+1} α_i. Now, using the Kummer-beta distribution (see [14]) as a foundational building block, it follows that a random vector Y* = (Y_1, …, Y_p; Y_{p+1}) = (Y; Y_{p+1}) is said to be multivariate Kummer–Dirichlet distributed with parameters (α, λ) = (α_1, …, α_p; α_{p+1}, λ), for α_i > 0, i = 1, …, p+1, p ≥ 2 and −∞ < λ < ∞, if its pdf is given by
f(\mathbf{y}) = C_2(\alpha, \lambda)\, y_1^{\alpha_1 - 1} \cdots y_p^{\alpha_p - 1} \left(1 - \sum_{i=1}^{p} y_i\right)^{\alpha_{p+1} - 1} \exp\left(\lambda \sum_{i=1}^{p} y_i\right), \quad (8)
where y_i > 0 and ∑_{i=1}^p y_i < 1 for i = 1, …, p. The following theorem gives the derivation of the normalizing constant C_2(α, λ).
Theorem 1.
In the general case of p ≥ 2, the normalizing constant C_2(α, λ) in pdf (8) is given by
\frac{1}{C_2(\alpha, \lambda)} = \frac{\prod_{i=1}^{p} \Gamma(\alpha_i)\, \Gamma(\alpha_{p+1})}{\Gamma\!\left(\sum_{i=1}^{p} \alpha_i + \alpha_{p+1}\right)} \sum_{m_1, \ldots, m_p \ge 0} \frac{\lambda^{\sum_{i=1}^{p} m_i}}{\prod_{i=1}^{p} m_i!} \cdot \frac{\prod_{i=1}^{p} (\alpha_i)_{m_i}}{\left(\sum_{i=1}^{p} \alpha_i + \alpha_{p+1}\right)_{\sum_{i=1}^{p} m_i}} = \frac{\prod_{i=1}^{p} \Gamma(\alpha_i)\, \Gamma(\alpha_{p+1})}{\Gamma\!\left(\sum_{i=1}^{p} \alpha_i + \alpha_{p+1}\right)}\; {}_1F_1\!\left(\sum_{i=1}^{p} \alpha_i;\ \sum_{i=1}^{p} \alpha_i + \alpha_{p+1};\ \lambda\right), \quad (9)
where α_i > 0 for i = 1, 2, …, p+1, −∞ < λ < ∞, (α)_n denotes the Pochhammer symbol (α)_n = Γ(α + n)/Γ(α), and ₁F₁(·; ·; ·) is the confluent hypergeometric function.
For the proof, refer to Appendix A.
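Theorem 1 can be checked numerically for p = 2 by comparing the closed form against direct integration of the unnormalized pdf (8) over the simplex (a Python/SciPy sketch with arbitrary illustrative parameter values):

```python
import numpy as np
from scipy.integrate import dblquad
from scipy.special import gamma as gammaf, hyp1f1

# Arbitrary illustrative parameters (not fitted values from the paper).
a1, a2, a3, lam = 2.0, 3.0, 1.5, 0.7

# Closed form from Theorem 1 for p = 2.
closed_form = (gammaf(a1) * gammaf(a2) * gammaf(a3) / gammaf(a1 + a2 + a3)
               * hyp1f1(a1 + a2, a1 + a2 + a3, lam))

# Direct integration of the unnormalized pdf (8) over {y1, y2 > 0, y1 + y2 < 1}.
numeric, _ = dblquad(
    lambda y2, y1: y1**(a1 - 1) * y2**(a2 - 1) * (1 - y1 - y2)**(a3 - 1)
                   * np.exp(lam * (y1 + y2)),
    0.0, 1.0,                    # y1 range
    0.0, lambda y1: 1.0 - y1)    # y2 range depends on y1
print(closed_form, numeric)
```

The two values agree to numerical precision, confirming the ₁F₁ representation of the constant.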

2.1. Kummer–Dirichlet Generator

In this section, we give the definition of the KD generated distribution, along with some technical details.
Definition 1.
A random vector X = (X_1, X_2, …, X_p) is said to follow a Kummer–Dirichlet generated distribution if its cdf is given by (5) and its pdf by
h(\mathbf{x}) = h(x_1, \ldots, x_p) = C_2(\alpha, \lambda) \left(1 - \sum_{i=1}^{p} G_i(x_i)\right)^{\alpha_{p+1} - 1} \prod_{i=1}^{p} g_i(x_i)\, G_i^{\alpha_i - 1}(x_i) \exp\left(\lambda G_i(x_i)\right), \quad (10)
where C_2(α, λ) is the normalizing constant (9), the shape parameters α = (α_1, …, α_{p+1}) are all positive, −∞ < λ < ∞, and g_i(x_i) and G_i(x_i), i = 1, 2, …, p, are the pdfs and cdfs, respectively, of the baseline distributions, with ∑_{i=1}^p G_i(x_i) < 1. The random vector is then denoted as X ∼ KDG(ψ), where ψ = (α, λ, ρ), with ρ the parameters of the baseline distribution.

2.1.1. Special Cases

Two classes of distributions stem from cdf (5) and pdf (10) as special cases of the Kummer–Dirichlet generated distribution.
  • Class of Dirichlet-generated distributions: When λ = 0 , the pdf (10) simplifies to the pdf of a Dirichlet-generated distribution, with baseline distribution G ( . ) and beta-generated marginal distributions (see [12,13]).
  • Class of exponentiated generalized-generated distributions: When λ = 0 and α_{p+1} = 1, the pdf (10) reduces to the multivariate exponentiated-generalized distribution (not yet introduced in the literature), whose marginal distributions are exponentiated-generalized distributions (see [20]).

2.1.2. Expansions and Marginals of the Kummer–Dirichlet Generated Distributions

Expanding the exponential term exp{λG_i(x_i)} in (10) in series form results in an infinite weighted sum of Dirichlet-generated distributions; in this case, the pdf (10) is given by
h(\mathbf{x}) = h(x_1, \ldots, x_p) = C_2(\alpha, \lambda) \sum_{m_1, \ldots, m_p \ge 0} \prod_{j=1}^{p} \frac{\lambda^{m_j}}{m_j!} \left(1 - \sum_{i=1}^{p} G_i(x_i)\right)^{\alpha_{p+1} - 1} \times \prod_{i=1}^{p} g_i(x_i)\, G_i^{\alpha_i + m_i - 1}(x_i) = C_2(\alpha, \lambda) \sum_{m_1, \ldots, m_p \ge 0} w_{m_j} \left(1 - \sum_{i=1}^{p} G_i(x_i)\right)^{\alpha_{p+1} - 1} \times \prod_{i=1}^{p} g_i(x_i)\, G_i^{\alpha_i + m_i - 1}(x_i), \quad (11)
where the coefficient w_{m_j} = ∏_{j=1}^p λ^{m_j}/m_j! can be considered as the weight for m_j ≥ 0.
The binomial expansion of the term (1 − ∑_{i=1}^p G_i(x_i))^{α_{p+1}−1} in (11), where ∑_{i=1}^p G_i(x_i) < 1, can be expressed as
\left(1 - \sum_{i=1}^{p} G_i(x_i)\right)^{\alpha_{p+1} - 1} = \sum_{k=0}^{\infty} (-1)^k \binom{\alpha_{p+1} - 1}{k} \left( G_1(x_1) + G_2(x_2) + \cdots + G_p(x_p) \right)^k = \sum_{k=0}^{\infty} \sum_{\substack{v_1, \ldots, v_p \ge 0 \\ v_1 + \cdots + v_p = k}} (-1)^k \frac{k!}{v_1! \cdots v_p!} \binom{\alpha_{p+1} - 1}{k} G_1^{v_1}(x_1) \cdots G_p^{v_p}(x_p). \quad (12)
It follows from (11) and (12) that the pdf of the Kummer–Dirichlet generated distribution can also be expressed as a linear combination of exponentiated distributions that were introduced by [21] and then expanded by [20,22,23], where the Weibull distribution was taken as the baseline distribution. Hence,
h(x_1, \ldots, x_p) = C_2(\alpha, \lambda) \sum_{m_j, v_j, k \ge 0} w_{m_j, v_j, k} \prod_{i=1}^{p} g_i(x_i)\, G_i^{\alpha_i + m_i + v_i - 1}(x_i), \quad (13)
where j = 1, 2, …, p and the coefficient w_{m_j, v_j, k} is given as
w_{m_j, v_j, k} = \prod_{j=1}^{p} \frac{\lambda^{m_j}}{m_j!}\, (-1)^k \frac{k!}{v_1! \cdots v_p!} \binom{\alpha_{p+1} - 1}{k}. \quad (14)
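The binomial–multinomial expansion (12) can be verified numerically at a fixed point of the support; the following sketch (p = 2, with illustrative values) truncates the series at 60 terms:

```python
import numpy as np
from math import factorial
from scipy.special import binom

# Check expansion (12) at a point: the truncated double series should
# recover (1 - G1 - G2)^(a - 1); a plays the role of alpha_{p+1}.
a = 1.5                 # illustrative, non-integer exponent
G1, G2 = 0.25, 0.35     # illustrative baseline cdf values, G1 + G2 < 1

direct = (1.0 - (G1 + G2))**(a - 1.0)

series = 0.0
for k in range(60):
    # inner multinomial sum equals (G1 + G2)^k
    inner = sum(factorial(k) / (factorial(v1) * factorial(k - v1))
                * G1**v1 * G2**(k - v1) for v1 in range(k + 1))
    series += (-1)**k * binom(a - 1.0, k) * inner
print(direct, series)
```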
The marginal pdf of X_i for i = 1, 2, …, p, if X ∼ KDG(ψ), ψ = (α, λ, ρ) (see (10)), is given as
h_i(x_i) = C_i(\alpha, \lambda)\, g_i(x_i)\, G_i^{\alpha_i - 1}(x_i) \left(1 - G_i(x_i)\right)^{\sum_{j=1, j \neq i}^{p+1} \alpha_j - 1} \exp\left(\lambda G_i(x_i)\right) \times {}_1F_1\!\left(\alpha_{p+1};\ \sum_{j=1, j \neq i}^{p} \alpha_j + \alpha_{p+1};\ \lambda\left(1 - G_i(x_i)\right)\right), \quad (15)
where C_i(α, λ) is the normalizing constant of the marginal distribution, and g_i(·) and G_i(·), i = 1, 2, …, p, are the pdfs and cdfs, respectively, of the baseline distributions.

3. The Kummer–Dirichlet Gamma Distribution

In this section, we focus on the gamma distribution as the chosen baseline distribution. The gamma distribution, which belongs to the exponential class, is a flexible model with a shape parameter that may offer a good fit to a variety of data sets [24]. The cdf and pdf of the gamma distribution with shape parameter δ > 0 and scale parameter θ > 0 are given as
G(x; \delta, \theta) = \frac{\gamma(\delta, x/\theta)}{\Gamma(\delta)} \quad (16)
and pdf
g(x; \delta, \theta) = \frac{1}{\theta^{\delta}\, \Gamma(\delta)}\, x^{\delta - 1} e^{-x/\theta}, \quad (17)
where γ(δ, x/θ) is the incomplete gamma function ∫_0^{x/θ} t^{δ−1} e^{−t} dt.
Thus, here, we explore the impact of the gamma distribution as the considered baseline distribution, where the cdf and pdf of the baseline distribution are given by (16) and (17), respectively. In this case, G_i(·) for i = 1, 2, …, p are the cdfs of gamma distributions with shape and scale parameters ρ = (δ, θ), δ_i > 0, θ_i > 0, i = 1, 2, …, p. We denote the random vector X ∼ KDGa(ψ) as Kummer–Dirichlet gamma (KDGa) distributed, where ψ = (α, λ, δ, θ).
Figure 2, Figure 3 and Figure 4 illustrate the effect of the parameters (α_1, α_2, α_3, δ_1, θ_1, δ_2, θ_2, λ) of the pdf (10). It is observed in Figure 2 that the parameters (α_1, α_2, α_3) reflect the influence or “weight” of each random variable X_i, in this case i = 1, 2. From Figure 2, it is observed that larger values of α_1 lead to skewness and heavier tails for the random variable X_1. Symmetry is observed in the first row of Figure 2 when (α_1, α_2, α_3) = (2, 2, 2). The parameters (δ_1, θ_1, δ_2, θ_2) influence the shape, peakedness and scale of the pdf (10). It is observed in Figure 3 that smaller values of δ_i, i = 1, 2, result in the pdf (10) being concentrated on a smaller scale, while larger values of δ_i, i = 1, 2, result in the pdf (10) being spread across a wider range of values. It is observed in Figure 4 that λ influences the tails, peakedness and narrowness of the pdf (10). It is observed in the first row of Figure 4 that smaller values of λ result in heavier tails.
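As a numerical sanity check, the bivariate pdf (10) with gamma baselines integrates to one when C₂ is taken from Theorem 1. A Python/SciPy sketch, with all parameter values illustrative only:

```python
import numpy as np
from scipy import stats
from scipy.integrate import dblquad
from scipy.special import gamma as gammaf, hyp1f1

# Illustrative KDGa parameters (generator and gamma baselines).
a1, a2, a3, lam = 2.0, 3.0, 1.5, 0.7
b1 = stats.gamma(a=1.96, scale=0.36)
b2 = stats.gamma(a=1.32, scale=0.59)

# Normalizing constant C2 from Theorem 1 (p = 2).
C2 = gammaf(a1 + a2 + a3) / (gammaf(a1) * gammaf(a2) * gammaf(a3)
                             * hyp1f1(a1 + a2, a1 + a2 + a3, lam))

def kdga_pdf(x1, x2):
    G1, G2 = b1.cdf(x1), b2.cdf(x2)
    if G1 + G2 >= 1.0:            # outside the support of (10)
        return 0.0
    return (C2 * (1.0 - G1 - G2)**(a3 - 1.0)
            * b1.pdf(x1) * G1**(a1 - 1.0)
            * b2.pdf(x2) * G2**(a2 - 1.0)
            * np.exp(lam * (G1 + G2)))

total, _ = dblquad(lambda x2, x1: kdga_pdf(x1, x2), 0.0, 15.0, 0.0, 15.0)
print(total)  # ~1: substituting u_i = G_i(x_i) recovers the normalized (8)
```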

Moment Generating Function of the KDGa

In this section, the moment generating function (mgf) and product moments of the random vector X = (X_1, X_2, …, X_p) ∼ KDGa(ψ), where ψ = (α, λ, δ, θ), are derived.
Theorem 2.
The mgf of the random vector X ∼ KDGa(α, λ, δ, θ) is given by
M_{\mathbf{X}}(\mathbf{t}) = C_2(\alpha, \lambda) \sum_{m_j, v_j, k \ge 0} w_{m_j, v_j, k} \times \prod_{i=1}^{p} \frac{1}{\theta_i^{\delta_i} \left( \frac{1}{\theta_i} - t_i \right)^{\delta_i}} \cdot \frac{1}{\alpha_i + m_i + v_i}, \quad (18)
where t = (t_1, …, t_p), C_2(α, λ) is the normalizing constant (9), the shape parameters α_1, …, α_p, α_{p+1} > 0, w_{m_j, v_j, k} is the coefficient given by (14) for j = 1, 2, …, p, −∞ < λ < ∞, and the shape and scale parameters satisfy δ_i > 0 and θ_i > 0 for i = 1, 2, …, p.
For the proof, refer to Appendix A.
Theorem 3.
Let n_i, i = 1, …, p, be positive integers. Then, the product moments of X ∼ KDGa(α, λ, δ, θ) are expressed in the following form:
E\!\left[ \prod_{i=1}^{p} X_i^{n_i} \right] = C_2(\alpha, \lambda) \sum_{m_j, v_j, k \ge 0} w_{m_j, v_j, k} \prod_{i=1}^{p} \frac{\theta_i^{n_i}\, \Gamma(n_i + \delta_i)}{\Gamma(\delta_i)\, (\alpha_i + m_i + v_i)}, \quad (19)
where C_2(α, λ) is the normalizing constant (9), the shape parameters α_1, …, α_p, α_{p+1} > 0, w_{m_j, v_j, k} is the coefficient given by (14) for j = 1, 2, …, p, −∞ < λ < ∞, and the shape and scale parameters satisfy δ_i > 0 and θ_i > 0 for i = 1, 2, …, p.
For the proof, refer to Appendix A.
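The factor θ_i^{n_i} Γ(n_i + δ_i)/Γ(δ_i) appearing in (19) is the raw moment of a gamma random variable, which can be confirmed directly (illustrative parameter values):

```python
from scipy import stats
from scipy.special import gamma as gammaf

# Gamma-baseline building block of Theorem 3:
# E[X^n] = theta^n * Gamma(n + delta) / Gamma(delta).
delta, theta, n = 1.96, 0.36, 3
closed = theta**n * gammaf(n + delta) / gammaf(delta)
numeric = stats.gamma(a=delta, scale=theta).moment(n)
print(closed, numeric)  # the two agree
```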
For the illustration section and the ease of the reader, the moments for the bivariate case (p = 2) of the Kummer–Dirichlet gamma distribution are given as
E\!\left[ X_1^r X_2^s \right] = C_2(\alpha, \lambda) \sum_{m_1, m_2 \ge 0} \sum_{k=0}^{\infty} \sum_{\substack{v_1, v_2 \ge 0 \\ v_1 + v_2 = k}} \frac{\lambda^{m_1 + m_2}}{m_1!\, m_2!}\, (-1)^k \frac{k!}{v_1!\, v_2!} \binom{\alpha_3 - 1}{k} \times \frac{\theta_1^{r}\, \theta_2^{s}\, \Gamma(\delta_1 + r)\, \Gamma(\delta_2 + s)}{\Gamma(\delta_1)\, \Gamma(\delta_2)\, (\alpha_1 + m_1 + v_1)(\alpha_2 + m_2 + v_2)}, \quad (20)
using the results of (14) and (19).

4. Synthetic Data Analysis

In this section, the performances of the Kummer–Dirichlet gamma and Dirichlet-gamma distributions are analyzed to illustrate the models' capabilities on a synthetic data set.

4.1. Study 1

In the first simulation study, an artificial data set is generated, via a specified seed value, from Weibull random variates using Algorithm 1. For this synthetic data set, the Weibull random variates w_i, i = 1, 2, 3, are generated using R; the random variable W is Weibull distributed [24] if W has cdf
G(w) = 1 - \exp\left\{ -\left( \frac{w}{\xi} \right)^{\nu} \right\}
and pdf
g(w) = \frac{\nu}{\xi^{\nu}}\, w^{\nu - 1} \exp\left\{ -\left( \frac{w}{\xi} \right)^{\nu} \right\},
where w ≥ 0, with shape parameter ν > 0 and scale parameter ξ > 0. The construction of this synthetic data set results in a compositional data set with negative correlation. The seed for generating the Weibull random variates is set at 7, with parameter values W_1 ∼ Wei(1.5, 1.5), W_2 ∼ Wei(4, 2) and W_3 ∼ Wei(4, 1).
Algorithm 1: Synthetic data generation using the Weibull distribution.
Step 1.
Generate 100 random variates W_i ∼ Wei(ξ_i, ν_i) for i = 1, 2, 3.
Step 2.
Define the random vector Y = (Y_1, Y_2, Y_3), where Y_i = W_i / ∑_{i=1}^{3} W_i, i = 1, 2, 3, and simulate a synthetic data set y = (y_1, y_2, y_3).
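Algorithm 1 can be sketched as follows (a Python stand-in for the paper's R code; note that NumPy's seeding does not reproduce the R random stream, so seed 7 here will not regenerate the paper's exact data set):

```python
import numpy as np

# numpy's weibull draws the standard (scale 1) variate, so multiply by xi.
rng = np.random.default_rng(7)   # seed 7, as in the text (stream differs from R)

n = 100
w = np.column_stack([
    1.5 * rng.weibull(1.5, n),   # W1 ~ Wei(xi=1.5, nu=1.5)
    4.0 * rng.weibull(2.0, n),   # W2 ~ Wei(xi=4,   nu=2)
    4.0 * rng.weibull(1.0, n),   # W3 ~ Wei(xi=4,   nu=1)
])
y = w / w.sum(axis=1, keepdims=True)   # Step 2: each row lies on the simplex
print(y.shape, bool(np.allclose(y.sum(axis=1), 1.0)))
```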
To measure the fit of the Kummer–Dirichlet gamma vs. the Dirichlet-gamma distribution, a ratio of Kolmogorov–Smirnov (KS) distance measures is calculated over a number of simulations, as given in Algorithms 2 and 3. Algorithm 2 gives the steps used to assess the performance of the models. This ratio of KS measures relies on a model testing technique developed by [12], called the empirical estimator of the cdf of a multivariate distribution. The technique compares the empirical cdfs of the observed and simulated data sets. The advantage of this technique is that one can also use the empirical cdfs to rank the simulated data. Ranking the data makes it possible to calculate more accurate distances between the observed data points and the simulated points. More details regarding this technique of ranking simulated data in order to calculate the optimal distance between data points are available in [12].
This technique is used here to analyze the performances of the Dirichlet-gamma (DGa) and the Kummer–Dirichlet gamma (KDGa) distributions. In this technique, the empirical cdf of the generated data set and the cdf of the analyzed distributions are used in calculating a ratio of Kolmogorov–Smirnov (KS) distance measures, where the model with the smallest KS measure is considered the more suitable of the two.
Algorithm 2: Computing the KS ratio measure.
Step 1.
Calculate the empirical cdf F̂(x) = P(X_1^{(i)} ≤ x_1, X_2^{(i)} ≤ x_2, …, X_p^{(i)} ≤ x_p) for data points x^{(i)} = (x_1^{(i)}, …, x_p^{(i)}), where i = 1, …, n and n is the sample size.
Step 2.
Obtain the estimates of the parameters of the two models, K D G a and D G a distributions.
Step 3.
Simulate two data sets of sizes d > n using the parameter estimates obtained in Step 2: x*_{KDGa} = (x_1*, x_2*, …, x_p*) and x*_{DGa} = (x_1*, x_2*, …, x_p*).
Step 4.
For each generated data set x K D G a * and x D G a * , calculate the empirical cdfs
F̂(x*) = P(X_1* ≤ x_1, X_2* ≤ x_2, …, X_p* ≤ x_p).
Step 5.
Compute the KS distance measure between F̂(x) and F̂(x*) as computed in Steps 2–4, where in this case KS = max |F̂(x*) − F̂(x)|.
Step 6.
Repeat Steps 2–5 m times, and compute the average of the KS measures.
Step 7.
Compare the average KS measure of the KDGa to the average KS measure of the DGa using the ratio KS_{KDGa} / KS_{DGa}.
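The empirical-cdf and KS-distance machinery of Algorithm 2 can be sketched as follows. The Dirichlet samples below merely stand in for draws from the fitted KDGa and DGa models, which are not implemented in this sketch:

```python
import numpy as np

def empirical_cdf(points, data):
    """Multivariate empirical cdf of `data` evaluated at each row of `points`
    (Steps 1 and 4): the fraction of rows of `data` that are coordinatewise <=."""
    return np.array([(data <= p).all(axis=1).mean() for p in points])

def ks_distance(observed, simulated):
    """Step 5: KS = max |F^(x*) - F^(x)|, both evaluated at the observed points."""
    return np.abs(empirical_cdf(observed, simulated)
                  - empirical_cdf(observed, observed)).max()

rng = np.random.default_rng(0)
obs = rng.dirichlet([2, 2, 2], size=50)[:, :2]        # stand-in observed data
sim_kdga = rng.dirichlet([2, 2, 2], size=200)[:, :2]  # stand-in for KDGa draws
sim_dga = rng.dirichlet([9, 1, 1], size=200)[:, :2]   # stand-in for DGa draws
print(ks_distance(obs, sim_kdga) / ks_distance(obs, sim_dga))  # Step 7 ratio
```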
Algorithm 3: Computing the KS ratio measure using Weibull.
Step 1.
Generate a synthetic non-Dirichlet data set using the Weibull random variates as computed before Algorithm 1 and take the data set as the observed data.
Step 2.
Using this observed data set, obtain parameter estimates for the Kummer–Dirichlet gamma and Dirichlet-gamma distributions as the proposed models.
Step 3.
From the obtained parameter estimates simulate data sets of sizes d = 50 and 100.
Calculate the empirical cdfs F x * ^ for each simulation, as seen in Step 4 of Algorithm 2.
Step 4.
Calculate the KS measures between the empirical cdf F x ^ and the cdfs of the two competing models F x * ^ , for each group.
Step 5.
Repeat Steps 3 and 4 a total of 50 times, and compute the average KS measure for the two models.
Step 6.
Express the KS measures of the KDGa and DGa as the ratio KS_{KDGa} / KS_{DGa} for each simulated group of d = 50 and 100.
It is observed in Figure 5 that the Kummer–Dirichlet gamma distribution adds additional coverage to the generated artificial non-Dirichlet data set. A KS ratio of 0.89:1 indicates that the KS distance of the Kummer–Dirichlet gamma is 11% smaller than that of the Dirichlet-gamma distribution. This ratio shows that, over the numerous simulations, the KS measure of the Kummer–Dirichlet gamma distribution was smaller than that of the Dirichlet-gamma distribution.

4.2. Study 2

In this simulation, the Expectation-Maximization (EM) algorithm is used to estimate the parameters for generated samples of sizes 50 and 100 from the Kummer–Dirichlet gamma distribution. The EM algorithm is considered here since the pdf of a Kummer–Dirichlet generated distribution can be expressed as a mixture of its special cases. The EM algorithm consists essentially of two main steps, the Expectation and Maximization steps, with the main aim of maximizing the log-likelihood function ll(ψ) of the observed data with respect to the unknown vector of parameters ψ. It is summarized as follows:
  • Step 1. The E-step: In this step, the missing data Z are computed.
  • Step 2. The M-step: In this step, obtain the parameter estimates that maximize K = E[ln h(X | ψ) | Z], where ln h(X | ψ) is the log-likelihood function and h(X | ψ) is the pdf (10).
In the bivariate case, let Q be the observed data (generated through Algorithm 4), let Z be the missing data, and let X* = (Q, Z) be the complete data set. For samples of sizes n = 50 and n = 100, let
ll(\psi) = \sum_{i=1}^{n} \log h(x_{1i}^{*}, x_{2i}^{*}; \psi)
be the log-likelihood function based on the complete data X * with parameters ψ = ( α , δ , θ , λ ) .
Algorithm 4: Generation of observed data for the EM algorithm.
Step 1.
Generate a random sample U ∼ Unif(0, 1) of size 30.
Step 2.
Generate a random sample of size 30 from the marginal distributions (15), where
C(\alpha, \lambda) \int_0^{G_1(q_1)} y_1^{\alpha_1 - 1} (1 - y_1)^{\alpha_2 + \alpha_3 - 1} e^{\lambda y_1}\, {}_1F_1\!\left(\alpha_3;\ \alpha_2 + \alpha_3;\ \lambda(1 - y_1)\right) dy_1 = U
C(\alpha, \lambda) \int_0^{G_2(q_2)} y_2^{\alpha_2 - 1} (1 - y_2)^{\alpha_1 + \alpha_3 - 1} e^{\lambda y_2}\, {}_1F_1\!\left(\alpha_3;\ \alpha_1 + \alpha_3;\ \lambda(1 - y_2)\right) dy_2 = U
Step 3.
Use the uniroot function in the R software to solve for q_1 and q_2, where G_i(q_i) is the cdf of the gamma distribution (16).
Step 4.
Observe data Q = ( q 1 , q 2 ) .
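Step 3's root finding can be sketched with SciPy's brentq, the analogue of R's uniroot. For illustration, a plain gamma cdf is inverted below, whereas Algorithm 4 inverts the KDGa marginal cdfs built from (15); the parameter values are illustrative:

```python
import numpy as np
from scipy import stats
from scipy.optimize import brentq

def invert_cdf(cdf, u, lo=1e-12, hi=1e3):
    """Solve cdf(q) = u by bracketing root finding (brentq ~ R's uniroot)."""
    return brentq(lambda q: cdf(q) - u, lo, hi)

rng = np.random.default_rng(1)
dist = stats.gamma(a=1.96, scale=0.36)   # illustrative cdf to invert
u = rng.uniform(size=5)
q = np.array([invert_cdf(dist.cdf, ui) for ui in u])
print(np.allclose(dist.cdf(q), u))  # True: F(q_i) = u_i at each root
```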
To compute the missing data, let
Z_{i,j} = \begin{cases} 1 & \text{if } q_i \text{ is from class } j \\ 0 & \text{otherwise,} \end{cases}
for q i , i = 1 , 2 .
Samples of sizes 50 and 100 are generated using 100 trials for each group of fixed parameters. Hence, 100 MLEs of the model parameters are obtained, using the optim function in R. The mean, bias and mean square error (MSE),
\text{Bias} = \frac{1}{100} \sum_{k=1}^{100} \left( \hat{\psi}_k - \psi_{true} \right) \quad \text{and} \quad \text{MSE} = \frac{1}{100} \sum_{k=1}^{100} \left( \hat{\psi}_k - \psi_{true} \right)^2,
are calculated. Here, ψ̂_k denotes the ML estimate of ψ_true (the chosen parameter values) at the k-th replication.
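The bias and MSE computations can be sketched as follows (the mock Gaussian-perturbed estimates below merely stand in for the 100 ML fits):

```python
import numpy as np

def bias_mse(estimates, true_value):
    """Per-parameter bias and MSE over the k replications (rows of estimates)."""
    err = estimates - true_value
    return err.mean(axis=0), (err ** 2).mean(axis=0)

rng = np.random.default_rng(42)
true_psi = np.array([3.03, 7.92, 4.10])               # a subset of psi, for brevity
est = true_psi + rng.normal(0.0, 0.1, size=(100, 3))  # mock ML estimates
bias, mse = bias_mse(est, true_psi)
print(bias.round(4), mse.round(4))
```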
Table 1 gives the results of simulation study 2 for chosen parameter values ψ = (α_1, α_2, α_3, δ_1, δ_2, θ_1, θ_2, λ) = (3.03, 7.92, 4.10, 1.96, 1.32, 0.36, 0.59, 1). The results in Table 1 illustrate that the mean, MSE and bias of the parameter estimates decrease for larger sample sizes (n). The lengths of the asymptotic confidence intervals also decrease with increasing sample size.

5. Application

5.1. Diagnostic Probabilities Data Set Analysis

In this data set, three behavioral states of attitude, or “diseases”, of students, known under the generic title of the “newmath syndrome”, are investigated and recorded using diagnostic probabilities. A sample of 15 students took part in this study, where diagnostic probabilities were assigned by clinicians for the variables algebritis, bilateral paralexia and calculus deficiency.
The performances of the Dirichlet-gamma and the newly developed Kummer–Dirichlet gamma distributions are investigated here to see whether these are suitable models for this data set, whose correlation matrix is given by
\begin{pmatrix} 1 & 0.332 & 0.581 \\ 0.332 & 1 & 0.574 \\ 0.581 & 0.574 & 1 \end{pmatrix}.
The initial parameter values needed for this performance test are obtained through a grid search using the R software. The initial parameter values for the Dirichlet-gamma distribution are given as (α_1, α_2, α_3, δ_1, θ_1, δ_2, θ_2) = (9, 5, 5, 9, 2.2, 4, 9.4), and (α_1, α_2, α_3, δ_1, θ_1, δ_2, θ_2, λ) = (7, 6, 5, 6, 3, 3, 3, 0.5) as initial values for the Kummer–Dirichlet gamma distribution. Goodness-of-fit measures, namely the Akaike information criterion (AIC) and the Bayesian information criterion (BIC), are used to assess the overall performance of the Kummer–Dirichlet gamma and Dirichlet-gamma distributions, where the model with the lowest AIC and BIC values is considered preferred.
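The AIC and BIC used here are the standard information criteria; a minimal sketch follows (the log-likelihood values below are illustrative placeholders, not the fitted values reported in Table 2; only the parameter counts, 7 for DGa and 8 for KDGa, and n = 15 come from the text):

```python
import numpy as np

def aic_bic(loglik, n_params, n_obs):
    """Standard information criteria; smaller values indicate a preferred model."""
    return 2 * n_params - 2 * loglik, n_params * np.log(n_obs) - 2 * loglik

# Hypothetical comparison: DGa has 7 parameters, KDGa has 8 (the extra lambda).
print(aic_bic(60.0, 7, 15))   # AIC = 2*7 - 2*60 = -106
print(aic_bic(64.0, 8, 15))
```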
The results of Table 2 and Figure 6 illustrate that the Kummer–Dirichlet gamma distribution serves as an alternative model for compositional data sets. Reference [12] illustrated that the Dirichlet-gamma is flexible in modeling compositional data sets; however, in this example, it is shown that the additional parameter λ adds flexibility, covering outliers that the Dirichlet-gamma distribution might not reach. The maximum likelihood value (ll) and the AIC and BIC measures also indicate that the Kummer–Dirichlet gamma is a better alternative for this data set.

5.2. The Mice Morris Water Maze Behavior Data Set Analysis

In this experiment, the time spent by rodents in the four different quadrants of a water maze is analyzed. The Morris water maze is a behavioral test mostly used on rodents (see [25]). The experiment begins by placing a rodent in a circular pool of water, where it is required to swim until it finds an escape platform in the pool. The aim of the experiment is to investigate the memory abilities and/or memory loss of different rodents. Figure 7 illustrates the experiment. In this data set, seven wild-type rodents are placed in a pool of water, and the time spent in the different quadrants is recorded.
In the study [25], the Dirichlet distribution was used as a suitable model for distinguishing the proportions of time spent across the different quadrants. In this example, the performances of the Dirichlet distribution and the newly developed KDGa distribution are thus compared to see whether the KDGa distribution is superior for this data set. The correlation matrix of the data is given by
\begin{pmatrix} 1 & 0.543 & 0.162 & 0.538 \\ 0.543 & 1 & 0.213 & 0.055 \\ 0.162 & 0.213 & 1 & 0.441 \\ 0.538 & 0.055 & 0.441 & 1 \end{pmatrix}.
The initial parameter values needed for this performance test are obtained through a grid search using the R software. The initial parameter values for the Dirichlet distribution are given as (α_1, α_2, α_3, α_4) = (2, 2, 1, 4), and (α_1, α_2, α_3, α_4, δ_1, θ_1, δ_2, θ_2, δ_3, θ_3, λ) = (1.27, 1.37, 1.20, 0.56, 1.15, 2.13, 0.84, 1.04, 1.06, 1.07, 1) as the initial values for the Kummer–Dirichlet gamma distribution.
The results of Table 3 illustrate that the Kummer–Dirichlet gamma distribution is a good competitor for this compositional data set. The estimated values of the parameters α̂_1, α̂_2, α̂_3, α̂_4 indicate the “weight” of each quadrant. For both the Dirichlet and the KDGa distributions, the value of α̂_1 is higher than the values of α̂_2, α̂_3, α̂_4, indicating that more time was spent in the first quadrant. The maximum likelihood value (ll) and the AIC and BIC measures also illustrate that the Kummer–Dirichlet gamma can be viewed as a good addition in analyzing this type of data set.

6. Conclusions and Discussion

In this paper, the Kummer–Dirichlet gamma (KDGa) distribution is presented, which is a member of the proposed Kummer–Dirichlet (KD) class of distributions. It is illustrated that other distributions and their marginal distributions emanate from this class, including the Dirichlet-generated distribution, with beta-generated marginals, as well as the exponentiated-generalized distribution. The pdf and moments of the KDGa distribution can be expressed as infinite sums of those of Dirichlet-gamma (DGa) distributions. The impact and usefulness of the KDGa distribution are illustrated via synthetic and real data sets, where its performance is compared to that of the Dirichlet and DGa distributions. We illustrated how this innovation on the Dirichlet distribution offers a better fit for compositional psychology diagnostic data sets where outliers are present. The extra parameter λ of the KDGa distribution proves to add more flexibility in modeling compositional data sets.
To conclude this section, we briefly discuss two further applications of the proposed KD generator model. A generative-discriminative classifier can be defined by solving the following compound integral of the KD distribution with a multinomial distribution:
\[
KDCM(\mathbf{X}\mid\boldsymbol{\alpha}) = \int_{\mathbf{y}} \mathcal{M}(\mathbf{X}\mid\mathbf{y})\, f(\mathbf{y})\, d\mathbf{y},
\]
where $f(\cdot)$ is given by (8) and $\mathcal{M}(\mathbf{X}\mid\mathbf{y})$ is the pmf of a multinomial distribution with count vector $\mathbf{X}$ and probability parameters $\mathbf{y}$; see [26] for a recent similar approach. Another well-known application is in Bayesian analysis, where the KD generator distribution can serve as a prior for the multinomial distribution, for the allocation probabilities when clustering with a finite mixture model, or in probabilistic graphical network modeling.
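A Monte Carlo sketch of this compound construction is given below, hedged in two ways: it is written in Python for illustration, and it substitutes a plain Dirichlet density for the KD generator $f$ in (8), since the compound integral has no closed form in general. With a Dirichlet mixing density the estimate can be checked against the known Dirichlet-multinomial mass function.

```python
import numpy as np
from scipy.stats import multinomial

rng = np.random.default_rng(0)

def compound_pmf(x, alpha, draws=4000):
    # Monte Carlo estimate of the compound integral: average the
    # multinomial pmf M(x | y) over draws y ~ f.  Here f is a plain
    # Dirichlet(alpha), a stand-in for the KD generator density (8).
    n = int(sum(x))
    ys = rng.dirichlet(alpha, size=draws)
    vals = [multinomial.pmf(x, n=n, p=y) for y in ys]
    return float(np.mean(vals))

est = compound_pmf([3, 1, 1], alpha=[2, 2, 1])
print(est)  # close to the exact Dirichlet-multinomial value, about 0.063
```

Replacing the Dirichlet draws with draws from the KD generator yields the $KDCM$ classifier kernel above.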

Author Contributions

Conceptualization, A.B. and M.A.; methodology, A.B., M.A. and S.M.; validation, S.M., M.A. and A.B.; formal analysis, S.M., A.B. and M.A.; investigation, S.M.; resources, S.M.; writing—original draft preparation, S.M.; writing—review and editing, M.A. and A.B.; visualization, S.M.; supervision, M.A. and A.B.; project administration, S.M.; funding acquisition, M.A. and A.B. All authors have read and agreed to the published version of the manuscript.

Funding

This work was based upon research supported in part by the Visiting professor programme, University of Pretoria and the National Research Foundation (NRF) of South Africa, SARChI Research Chair UID: 71199; Reference: IFR170227223754 grant No. 109214; and Reference: SRUG190308422768 grant No. 120839. The opinions expressed and conclusions arrived at are those of the authors and are not necessarily to be attributed to the NRF. The research of the corresponding author is supported by a grant from Ferdowsi University of Mashhad (N.2/55271).

Data Availability Statement

The data used in this article may be simulated in R using the stated seed and parameter values. The first real data set is available from the “Compositional” package in R, and the second real data set is available from the article referenced in [25].

Acknowledgments

We would like to sincerely thank the two anonymous reviewers for their constructive comments, which led us to add many details to the paper and improved the presentation.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A. Proof of the Main Results

Proof of Theorem 1.  
Expanding the exponential term in (8), using the Taylor series, it follows that
\[
\begin{aligned}
\frac{1}{C_2(\boldsymbol{\alpha},\lambda)}
&= \int_{\Omega_p} \prod_{i=1}^{p} y_i^{\alpha_i-1}\Big(1-\sum_{i=1}^{p} y_i\Big)^{\alpha_{p+1}-1} e^{\lambda\sum_{i=1}^{p} y_i}\, d\mathbf{y}\\
&= \sum_{m_1,\dots,m_p\ge 0} \frac{\lambda^{\sum_{i=1}^{p} m_i}}{\prod_{i=1}^{p} m_i!} \int_{\Omega_p} \prod_{i=1}^{p} y_i^{\alpha_i+m_i-1}\Big(1-\sum_{i=1}^{p} y_i\Big)^{\alpha_{p+1}-1} d\mathbf{y}\\
&= \sum_{m_1,\dots,m_p\ge 0} \frac{\lambda^{\sum_{i=1}^{p} m_i}}{\prod_{i=1}^{p} m_i!} \int_{\Omega_p} y_1^{\alpha_1+m_1-1} \prod_{i=2}^{p} y_i^{\alpha_i+m_i-1} \Big(1-y_1-\sum_{i=2}^{p} y_i\Big)^{\alpha_{p+1}-1} d\mathbf{y}\\
&= \sum_{m_1,\dots,m_p\ge 0} \frac{\lambda^{\sum_{i=1}^{p} m_i}}{\prod_{i=1}^{p} m_i!} \int_{\Omega_p} \left(\frac{y_1}{1-\sum_{i=2}^{p} y_i}\right)^{\alpha_1+m_1-1} \left(1-\frac{y_1}{1-\sum_{i=2}^{p} y_i}\right)^{\alpha_{p+1}-1}\\
&\qquad\times \prod_{i=2}^{p} y_i^{\alpha_i+m_i-1} \Big(1-\sum_{i=2}^{p} y_i\Big)^{\alpha_1+m_1+\alpha_{p+1}-2} d\mathbf{y}. \tag{A1}
\end{aligned}
\]
Let $q_1 = y_1\big/\big(1-\sum_{i=2}^{p} y_i\big)$ in (A1), so that $dy_1 = \big(1-\sum_{i=2}^{p} y_i\big)\, dq_1$. Then
\[
\frac{1}{C_2(\boldsymbol{\alpha},\lambda)} = \sum_{m_1,\dots,m_p\ge 0} \frac{\lambda^{\sum_{i=1}^{p} m_i}}{\prod_{i=1}^{p} m_i!}\, B(\alpha_1+m_1,\alpha_{p+1}) \int_{\Omega_{p-1}} \prod_{i=2}^{p} y_i^{\alpha_i+m_i-1} \Big(1-\sum_{i=2}^{p} y_i\Big)^{\alpha_1+m_1+\alpha_{p+1}-1} dy_2\, dy_3 \cdots dy_p, \tag{A2}
\]
where $B(\cdot,\cdot)$ is the beta function. The expression in (A2) can be simplified further through successive changes of variable $q_j = y_j\big/\big(1-\sum_{i=j+1}^{p} y_i\big)$ for $j = 2,\dots,p$. The result of (A2) is solved in detail by [12]. Hence,
\[
\frac{1}{C_2(\boldsymbol{\alpha},\lambda)} = \sum_{m_1,\dots,m_p\ge 0} \frac{\lambda^{\sum_{i=1}^{p} m_i}}{\prod_{i=1}^{p} m_i!} \prod_{i=1}^{p-1} B\Big(\alpha_i+m_i,\ \sum_{j=i+1}^{p}(\alpha_j+m_j)+\alpha_{p+1}\Big) \times B(\alpha_p+m_p,\alpha_{p+1}). \tag{A3}
\]
It then follows that
\[
\begin{aligned}
\frac{1}{C_2(\boldsymbol{\alpha},\lambda)}
&= \sum_{m_1,\dots,m_p\ge 0} \frac{\lambda^{\sum_{i=1}^{p} m_i}}{\prod_{i=1}^{p} m_i!} \prod_{i=1}^{p-1} \frac{\Gamma(\alpha_i+m_i)\,\Gamma\big(\sum_{j=i+1}^{p}(\alpha_j+m_j)+\alpha_{p+1}\big)}{\Gamma\big(\alpha_i+m_i+\sum_{j=i+1}^{p}(\alpha_j+m_j)+\alpha_{p+1}\big)} \times \frac{\Gamma(\alpha_p+m_p)\,\Gamma(\alpha_{p+1})}{\Gamma(\alpha_p+\alpha_{p+1}+m_p)}\\
&= \frac{\prod_{i=1}^{p}\Gamma(\alpha_i)\,\Gamma(\alpha_{p+1})}{\Gamma\big(\sum_{i=1}^{p}\alpha_i+\alpha_{p+1}\big)} \sum_{m_1,\dots,m_p\ge 0} \frac{\lambda^{\sum_{i=1}^{p} m_i}}{\prod_{i=1}^{p} m_i!} \frac{\prod_{i=1}^{p}(\alpha_i)_{m_i}}{\big(\sum_{i=1}^{p}\alpha_i+\alpha_{p+1}\big)_{\sum_{i=1}^{p} m_i}},
\end{aligned}
\]
which completes the proof, where $\alpha_i > 0$ for $i = 1, 2, \dots, p+1$, and where the sum of Pochhammer symbols can be represented as the confluent hypergeometric function ${}_1F_1(\cdot\,;\cdot\,;\cdot)$. □
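The series representation just derived can be spot-checked numerically. For $p = 1$ the identity reduces to $\int_0^1 y^{\alpha_1-1}(1-y)^{\alpha_2-1} e^{\lambda y}\, dy = B(\alpha_1,\alpha_2)\,{}_1F_1(\alpha_1;\alpha_1+\alpha_2;\lambda)$; the Python check below is illustrative, with the parameter values and the sign convention for $\lambda$ assumed as in the expansion above.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import beta, hyp1f1

# Illustrative Kummer-beta kernel parameters (alpha_1, alpha_{p+1}, lambda).
a1, a2, lam = 2.0, 3.0, 1.5

# Left side: direct numerical integration of the kernel over (0, 1).
lhs, _ = quad(lambda y: y**(a1 - 1) * (1 - y)**(a2 - 1) * np.exp(lam * y),
              0.0, 1.0)

# Right side: the closed form from Theorem 1 with p = 1.
rhs = beta(a1, a2) * hyp1f1(a1, a1 + a2, lam)

print(lhs, rhs)  # the two values agree to numerical precision
```

The same check extends to $p > 1$ by integrating over the simplex, at higher computational cost.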
Proof of Theorem 2.  
By definition and using (10) and (13), it follows that
\[
\begin{aligned}
M_{\mathbf{X}}(\mathbf{t}) = E\big[e^{\mathbf{t}'\mathbf{X}}\big]
&= C_2(\boldsymbol{\alpha},\lambda) \sum_{m_j,v_j,k\ge 0} w_{m_j,v_j,k} \int_{\mathbb{R}^p} \prod_{i=1}^{p} e^{t_i x_i} g_i(x_i)\, G_i^{\alpha_i+m_i+v_i-1}(x_i)\, d\mathbf{x}\\
&= C_2(\boldsymbol{\alpha},\lambda) \sum_{m_j,v_j,k\ge 0} w_{m_j,v_j,k} \int_{\mathbb{R}^p} \prod_{i=1}^{p} \frac{x_i^{\delta_i-1}}{\theta_i^{\delta_i}\Gamma(\delta_i)}\, e^{-x_i(1-\theta_i t_i)/\theta_i}\, G_i^{\alpha_i+m_i+v_i-1}(x_i)\, d\mathbf{x}\\
&= C_2(\boldsymbol{\alpha},\lambda) \sum_{m_j,v_j,k\ge 0} w_{m_j,v_j,k} \int_{\mathbb{R}^p} \prod_{i=1}^{p} \frac{1}{(1-\theta_i t_i)^{\delta_i}} \cdot \frac{(1-\theta_i t_i)^{\delta_i}\, x_i^{\delta_i-1}}{\theta_i^{\delta_i}\Gamma(\delta_i)}\, e^{-x_i(1-\theta_i t_i)/\theta_i}\, G_i^{\alpha_i+m_i+v_i-1}(x_i)\, d\mathbf{x}\\
&= C_2(\boldsymbol{\alpha},\lambda) \sum_{m_j,v_j,k\ge 0} w_{m_j,v_j,k}\, E\left[\prod_{i=1}^{p} \frac{1}{(1-\theta_i t_i)^{\delta_i}}\, G_i^{\alpha_i+m_i+v_i-1}(Y_i)\right], \tag{A4}
\end{aligned}
\]
where $d\mathbf{x} = dx_1\, dx_2 \cdots dx_p$ and $Y_i \sim \mathrm{Gamma}\big(\tfrac{\theta_i}{1-\theta_i t_i},\, \delta_i\big)$. Since $G_i(Y_i) \in (0,1)$, let $G_i(Y_i) \equiv U_i \sim \mathrm{Unif}(0,1)$. The proof is completed by simplifying the expected value in (A4) as follows:
\[
E\left[\prod_{i=1}^{p} \frac{1}{(1-\theta_i t_i)^{\delta_i}}\, G_i^{\alpha_i+m_i+v_i-1}(Y_i)\right] = \prod_{i=1}^{p} \frac{1}{(1-\theta_i t_i)^{\delta_i}} \int_0^1\!\!\cdots\!\int_0^1 \prod_{i=1}^{p} U_i^{\alpha_i+m_i+v_i-1}\, d\mathbf{U} = \prod_{i=1}^{p} \frac{1}{(1-\theta_i t_i)^{\delta_i}} \cdot \frac{1}{\alpha_i+m_i+v_i}, \tag{A5}
\]
where $d\mathbf{U} = dU_1\, dU_2 \cdots dU_p$. The result of (18) follows from (A5). □
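The device used at the end of the proof is the probability integral transform: applying a continuous cdf to a draw from that same distribution yields a $\mathrm{Unif}(0,1)$ variate. A quick empirical check of this basic fact is sketched below; the Python code, seed, and the shape and scale values are illustrative assumptions.

```python
import numpy as np
from scipy.stats import gamma, kstest

rng = np.random.default_rng(7)
shape, scale = 2.0, 1.5  # illustrative stand-ins for delta_i and theta_i

# Draw from a gamma distribution, then push the draws through the SAME
# gamma cdf: the transformed values should be Uniform(0, 1).
y = gamma.rvs(a=shape, scale=scale, size=5000, random_state=rng)
u = gamma.cdf(y, a=shape, scale=scale)

# Kolmogorov-Smirnov distance from the uniform distribution; a small
# statistic indicates the transform is indeed uniform.
stat, pval = kstest(u, "uniform")
print(stat)
```

For a sample of this size, the KS statistic for truly uniform data is on the order of $0.01$.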
Proof of Theorem 3.  
For random vector $\mathbf{X} = (X_1, \dots, X_p) \in \mathbb{R}^p$ with pdf (10) (represented as (13)), for $\sum_{i=1}^{p} G_i(x_i) < 1$, it follows that
\[
\begin{aligned}
E\left[\prod_{i=1}^{p} X_i^{n_i}\right]
&= \int_{\mathbb{R}^p} C_2(\boldsymbol{\alpha},\lambda)\Big(1-\sum_{i=1}^{p} G_i(x_i)\Big)^{\alpha_{p+1}-1} \prod_{i=1}^{p} x_i^{n_i}\, g_i(x_i)\, G_i^{\alpha_i-1}(x_i)\, e^{\lambda\sum_{i=1}^{p} G_i(x_i)}\, d\mathbf{x}\\
&= C_2(\boldsymbol{\alpha},\lambda) \sum_{m_j,v_j,k\ge 0} w_{m_j,v_j,k} \int_{\mathbb{R}^p} \prod_{i=1}^{p} x_i^{n_i}\, g_i(x_i)\, G_i^{\alpha_i+m_i+v_i-1}(x_i)\, d\mathbf{x}\\
&= C_2(\boldsymbol{\alpha},\lambda) \sum_{m_j,v_j,k\ge 0} w_{m_j,v_j,k} \int_{\mathbb{R}^p} \prod_{i=1}^{p} \frac{1}{\theta_i^{\delta_i}\Gamma(\delta_i)}\, x_i^{n_i+\delta_i-1} e^{-x_i/\theta_i}\, G_i^{\alpha_i+m_i+v_i-1}(x_i)\, d\mathbf{x}\\
&= C_2(\boldsymbol{\alpha},\lambda) \sum_{m_j,v_j,k\ge 0} w_{m_j,v_j,k} \int_{\mathbb{R}^p} \prod_{i=1}^{p} \frac{\theta_i^{n_i}\Gamma(n_i+\delta_i)}{\Gamma(\delta_i)} \cdot \frac{1}{\theta_i^{n_i+\delta_i}\Gamma(n_i+\delta_i)}\, x_i^{n_i+\delta_i-1} e^{-x_i/\theta_i}\, G_i^{\alpha_i+m_i+v_i-1}(x_i)\, d\mathbf{x}\\
&= C_2(\boldsymbol{\alpha},\lambda) \sum_{m_j,v_j,k\ge 0} w_{m_j,v_j,k}\, E\left[\prod_{i=1}^{p} \frac{\theta_i^{n_i}\Gamma(n_i+\delta_i)}{\Gamma(\delta_i)}\, G_i^{\alpha_i+m_i+v_i-1}(Y_i)\right], \tag{A6}
\end{aligned}
\]
where $d\mathbf{x} = dx_1\, dx_2 \cdots dx_p$, $Y_i \sim \mathrm{Gamma}(\theta_i,\, n_i+\delta_i)$ and $G_i(Y_i) \equiv U_i \sim \mathrm{Unif}(0,1)$. The proof is completed by simplifying the expected value in (A6), following the same procedure as in (A5). □

References

1. Aitchison, J. The statistical analysis of compositional data (with discussion). J. R. Stat. Soc. Ser. B 1982, 44, 139–177.
2. Balakrishnan, N.; Nevzorov, V.B. A Primer on Statistical Distributions; John Wiley & Sons: New York, NY, USA, 2003.
3. Barndorff-Nielsen, O.E.; Jorgensen, B. Some parametric models on the simplex. J. Multivar. Anal. 1991, 39, 106–116.
4. Connor, R.J.; Mosimann, J.E. Concepts of independence for proportions with a generalization of the Dirichlet distribution. J. Am. Stat. Assoc. 1969, 64, 194–206.
5. Epaillard, A.; Bouguila, N. Data-free metrics for Dirichlet and generalized Dirichlet mixture-based HMMs—A practical study. Pattern Recognit. 2019, 85, 207–219.
6. Favaro, S.; Hadjicharalambous, G.; Prunster, I. On a class of distributions on the simplex. J. Stat. Plan. Inference 2011, 141, 2987–3004.
7. Ng, K.W.; Tian, G.L.; Tang, M.L. Dirichlet and Related Distributions: Theory, Methods and Applications; John Wiley & Sons: New York, NY, USA, 2011; Volume 1.
8. Thomas, S.; Jacob, J. A generalized Dirichlet model. Stat. Probab. Lett. 2006, 76, 1761–1767.
9. Marshall, A.; Olkin, I. Inequalities: Theory of Majorization and Its Applications; Academic Press: New York, NY, USA, 1979.
10. Gupta, R.D. Generalized Liouville distributions. Comput. Math. Appl. 1996, 32, 103–109.
11. Sivazlian, B.D. On a multivariate extension of the gamma and beta distributions. SIAM J. Appl. Math. 1981, 41, 205–209.
12. Arashi, M.; Bekker, A.; de Waal, D.J.; Makgai, S.L. Constructing multivariate distributions via the Dirichlet generator. In Computational and Methodological Statistics and Biostatistics: Contemporary Essays in Advancement; Springer: Berlin/Heidelberg, Germany, 2020; pp. 159–186.
13. Eugene, N.; Lee, C.; Famoye, F. Beta-normal distribution and its applications. Commun. Stat.-Theory Methods 2002, 31, 497–512.
14. Ng, K.W.; Kotz, S. Kummer-Gamma and Kummer-Beta Univariate and Multivariate Distributions; Research Report 84; Department of Statistics, The University of Hong Kong: Hong Kong, China, 1995.
15. Pescim, R.R.; Cordeiro, G.M.; Demetrio, C.G.B.; Ortega, E.M.M.; Nadarajah, S. The new class of Kummer beta generalized distributions. SORT Stat. Oper. Res. Trans. 2012, 36, 153–180.
16. Pescim, R.R.; Cordeiro, G.M.; Nadarajah, S.; Demetrio, C.G.B.; Ortega, E.M.M. The Kummer beta Birnbaum-Saunders: An alternative fatigue life distribution. Hacet. J. Math. Stat. 2014, 43, 473–510.
17. Bran-Cardona, P.A.; Orozco-Castaneda, J.M.; Nagar, D.K. Bivariate generalization of the Kummer-beta distribution. Rev. Colomb. Estad. 2011, 34, 497–512.
18. Pescim, R.R.; Nadarajah, S. The Kummer beta normal: A new useful skew model. J. Data Sci. 2015, 13, 509–532.
19. Cordeiro, G.M.; Pescim, R.R.; Demetrio, C.G.B.; Ortega, E.M.M. The Kummer beta generalized gamma distribution. J. Data Sci. 2014, 12, 661–698.
20. Mudholkar, G.S.; Srivastava, D.K.; Freimer, M. The exponentiated Weibull family: A reanalysis of the bus-motor-failure data. Technometrics 1995, 37, 436–445.
21. Gupta, R.C.; Gupta, P.L.; Gupta, R.D. Modeling failure time data by Lehmann alternatives. Commun. Stat.-Theory Methods 1998, 27, 887–904.
22. Gupta, R.D.; Kundu, D. Exponentiated exponential family: An alternative to gamma and Weibull distributions. Biom. J. 2001, 43, 117–130.
23. Mudholkar, G.S.; Srivastava, D.K. Exponentiated Weibull family for analyzing bathtub failure-rate data. IEEE Trans. Reliab. 1993, 42, 299–302.
24. Bain, L.J.; Engelhardt, M. Introduction to Probability and Mathematical Statistics, 2nd ed.; Brooks/Cole Cengage Learning: Boston, MA, USA, 1992.
25. Maugard, M.; Doux, C.; Bonvento, G. A new statistical method to analyze Morris Water Maze data using Dirichlet distribution. F1000Research 2019, 8, 1–14.
26. Zamzami, N.; Bouguila, N. Hybrid generative discriminative approaches based on multinomial scaled Dirichlet mixture models. Appl. Intell. 2019, 49, 3783–3800.
Figure 1. Plots of the data and the Dirichlet distribution on the diagnostic probabilities data set.
Figure 2. Example pdfs and contour plots for (10) for ( α 1 , α 2 , α 3 , δ 1 , θ 1 , δ 2 , θ 2 , λ ) when (a) (0.1,2,2,2,2,2,2,2), (b) (2,2,2,2,2,2,2,2), (c) (4,2,2,2,2,2,2,2), (d) (0.1,2,4,2,1.5,2,1.5,−2), (e) (1,2,4,2,1.5,2,1.5,−2) and (f) (4,2,4,2,1.5,2,1.5,−2).
Figure 3. Example pdfs and contour plots of (10) for various values of ( α 1 , α 2 , α 3 , δ 1 , θ 1 , δ 2 , θ 2 , λ ) when (a) (2,2,2,0.5,2,2,2,2), (b) (2,2,2,1,2,2,2,2), (c) (2,2,2,1,2,1,2,2), (d) (2,2,2,1.8,0.8,2,2,2), (e) (2,2,2,1.8,1.5,2,2,2), (f) (2,2,2,1.8,3,2,2,2).
Figure 4. Example pdfs and contour plots of (10) for various values of ( α 1 , α 2 , α 3 , δ 1 , θ 1 , δ 2 , θ 2 , λ ) when (a) (2,2,2,2,2,2,2,−4), (b) (2,2,2,2,2,2,2,0), (c) (2,2,2,2,2,2,2,4), (d) (4,2,8,12,8,2,0.5,−8), (e) (4,2,8,12,8,2,0.5,−4), (f) (4,2,8,12,8,2,0.5,4).
Figure 5. Performance of the Kummer–Dirichlet gamma and Dirichlet-gamma distributions on the synthetic compositional data set generated using Algorithm 3.
Figure 6. Contour plots of the Dirichlet-gamma and Kummer–Dirichlet gamma distributions on diagnostic probabilities data.
Figure 7. An illustration of the Morris water maze experiment.
Table 1. Simulation results for sample sizes $n = 50$ and $n = 100$, with $\psi = (\alpha_1, \alpha_2, \alpha_3, \delta_1, \delta_2, \theta_1, \theta_2, \lambda) = (3.03, 7.92, 4.10, 1.96, 1.32, 0.36, 0.59, 1)$.

| $n = 50$ | $\hat{\alpha}_1$ | $\hat{\alpha}_2$ | $\hat{\alpha}_3$ | $\hat{\delta}_1$ | $\hat{\delta}_2$ | $\hat{\theta}_1$ | $\hat{\theta}_2$ | $\hat{\lambda}$ |
|---|---|---|---|---|---|---|---|---|
| Mean | 3.111 | 9.204 | 5.973 | 2.186 | 2.122 | 1.060 | 1.527 | 1.893 |
| Bias | 0.081 | 1.284 | 1.873 | 0.226 | 0.802 | 0.7 | 0.937 | 0.893 |
| MSE | 4.807 | 6.194 | 5.299 | 4.332 | 5.916 | 5.818 | 4.410 | 3.909 |
| Length of asymptotic CI | 4.986 | 5.722 | 6.413 | 2.714 | 3.101 | 3.672 | 1.951 | 5.025 |

| $n = 100$ | $\hat{\alpha}_1$ | $\hat{\alpha}_2$ | $\hat{\alpha}_3$ | $\hat{\delta}_1$ | $\hat{\delta}_2$ | $\hat{\theta}_1$ | $\hat{\theta}_2$ | $\hat{\lambda}$ |
|---|---|---|---|---|---|---|---|---|
| Mean | 3.058 | 8.005 | 4.207 | 1.986 | 1.585 | 0.631 | 0.984 | 1.048 |
| Bias | 0.028 | 0.055 | 0.107 | 0.026 | 0.265 | 0.271 | 0.394 | 0.048 |
| MSE | 3.937 | 5.027 | 4.731 | 3.908 | 5.182 | 3.922 | 4.008 | 3.029 |
| Length of asymptotic CI | 4.306 | 5.291 | 4.285 | 2.831 | 2.399 | 2.068 | 0.886 | 3.638 |
Table 2. Parameter estimates and the performance analysis for the diagnostic probabilities data set.

| Model (MLE) | $\hat{\alpha}_1$ | $\hat{\alpha}_2$ | $\hat{\alpha}_3$ | $\hat{\lambda}$ | $\hat{\delta}_1$ | $\hat{\theta}_1$ | $\hat{\delta}_2$ | $\hat{\theta}_2$ |
|---|---|---|---|---|---|---|---|---|
| DGa | 14.804 | 1.931 | 2.734 | n/a | 0.153 | 0.949 | 0.836 | 0.268 |
| KDGa | 17.8259 | 11.806 | 5.028 | 0.524 | 0.178 | 0.109 | 0.216 | 0.025 |

| Model | $ll$ | AIC | BIC |
|---|---|---|---|
| DGa | −22.174 | 58.348 | 63.305 |
| KDGa | −17.961 | 51.921 | 57.586 |
Table 3. Parameter estimates and performance analysis for the mice Morris water maze behavior data set.

| Model (MLE) | $\hat{\alpha}_1$ | $\hat{\alpha}_2$ | $\hat{\alpha}_3$ | $\hat{\alpha}_4$ | $\hat{\delta}_1$ | $\hat{\theta}_1$ | $\hat{\delta}_2$ | $\hat{\theta}_2$ | $\hat{\delta}_3$ |
|---|---|---|---|---|---|---|---|---|---|
| Dirichlet | 12.556 | 10.610 | 9.022 | 9.492 | n/a | n/a | n/a | n/a | n/a |
| KDGa | 1.493 | 1.378 | 1.202 | 0.563 | 1.147 | 2.130 | 0.844 | 1.045 | 1.061 |

| Model | $\hat{\theta}_3$ | $\hat{\lambda}$ | $ll$ | AIC | BIC |
|---|---|---|---|---|---|
| Dirichlet | n/a | n/a | −29.513 | 65.026 | 64.864 |
| KDGa | 1.075 | 1.000 | −16.056 | 54.112 | 53.517 |
