Article

The Exact Density of the Eigenvalues of the Wishart and Matrix-Variate Gamma and Beta Random Variables

1 Department of Mathematics and Statistics, McGill University, Montreal, QC H3A 0G4, Canada
2 Department of Statistical and Actuarial Sciences, Western University, London, ON N6A 5B7, Canada
* Author to whom correspondence should be addressed.
Mathematics 2024, 12(15), 2427; https://doi.org/10.3390/math12152427
Submission received: 13 June 2024 / Revised: 28 July 2024 / Accepted: 29 July 2024 / Published: 5 August 2024
(This article belongs to the Special Issue Theory and Applications of Random Matrix)

Abstract:
The determination of the distributions of the eigenvalues associated with matrix-variate gamma and beta random variables of either type proves to be a challenging problem. Several of the approaches utilized so far yield unwieldy representations that, for instance, are expressed in terms of multiple integrals, functions of skew symmetric matrices, ratios of determinants, solutions of differential equations, zonal polynomials, and products of incomplete gamma or beta functions. In the present paper, representations of the density functions of the smallest, largest and j th largest eigenvalues of matrix-variate gamma and each type of beta random variables are explicitly provided as finite sums when certain parameters are integers and, as explicit series, in the general situations. In each instance, both the real and complex cases are considered. The derivations initially involve an orthonormal or unitary transformation whereby the wedge products of the differential elements of the eigenvalues can be worked out from those of the original matrix-variate random variables. Some of these results also address the distribution of the eigenvalues of a central Wishart matrix as well as eigenvalue problems arising in connection with the analysis of variance procedure and certain tests of hypotheses in multivariate analysis. Additionally, three numerical examples are provided for illustration purposes.

1. Introduction

The distributions of the eigenvalues associated with certain matrix-variate random variables are explored in this paper. The density functions of the matrix-variate gamma and beta distributions of both types will be specified further in this section along with a Jacobian of matrix transformation which is required to derive the results. Both the real and complex cases will be considered. A brief review of existing results will be provided as well.
Eigenvalues come about in numerous fields of scientific investigation which are likely to benefit from related distributional advances. For instance, they are utilized in the study of natural frequencies in mechanical and structural systems [1], they correspond to the energy levels of physical systems in quantum mechanics [2], they assist in the determination of the stability of dynamic systems [3], and they are used in quantum chemistry to describe atomic orbitals [4]. In their investigation on limits of extreme eigenvalues, [5] refer to signal processing, pattern recognition and edge detection in connection with the support of the spectral distribution of related covariance matrices. Actually, several statistical quantities are associated with eigenvalues or eigenvectors. Consider for example the determinant and the trace of a certain matrix-variate random variable; while the former is equal to the product of its eigenvalues, the latter is equal to their sum. Thus, the distributions of the determinant and the trace of such a matrix are available from those of simple functions of its eigenvalues.
The following notations will be utilized in this paper. Real scalar variables, whether mathematical variables or random variables, will be denoted by lower-case letters such as x, y and z. Capital letters such as X, Y and Z will be used to denote real vector/matrix variables. Real or complex scalar constants will be written as a, b, …, and real or complex vector/matrix constants, as A, B, …. A tilde will be placed on top of letters such as x̃, ỹ, X̃ and Ỹ to denote variables in the complex domain. The determinant of a p × p matrix A = (a_{ij}) will be denoted by |A| or det(A) whether the elements a_{ij} are real or complex, and the absolute value of the determinant will be written as |det(A)|. For example, if det(A) = a + ib where i = √(−1) and a and b are real scalars, then its absolute value or modulus is |det(A)| = √(a^2 + b^2) = +√(det(AA*)), where A* represents the complex conjugate transpose of A. If X = (x_{ij}) is an m × n matrix whose elements x_{ij} are distinct real scalar variables, then the wedge product of the differentials dx_{ij} is written as dX = ∧_{i=1}^{m} ∧_{j=1}^{n} dx_{ij}, where, for two real scalar variables x and y with differentials dx and dy, the wedge product is defined as dx ∧ dy = −dy ∧ dx, so that dx ∧ dx = 0 and dy ∧ dy = 0. If the real p × p matrix X = (x_{ij}) is symmetric, that is, X = X′, then dX = ∧_{i≥j} dx_{ij} = ∧_{i≤j} dx_{ij}. If X̃ is an m × n matrix in the complex domain, then X̃ = X_1 + iX_2 where i = √(−1) and X_1 and X_2 are real, and dX̃ = dX_1 ∧ dX_2. If X̃ is an m × n matrix in the complex domain and f(X̃) is a real-valued scalar function of X̃, then the integral over X̃ will be denoted as ∫_{X̃} f(X̃) dX̃. If f(X̃) is such that f(X̃) ≥ 0 for all X̃ and ∫_{X̃} f(X̃) dX̃ = 1, then f(X̃) will be referred to as a statistical density function of X̃. If X̃ = X̃*, X̃ is called a Hermitian matrix.
If A = A* is a p × p Hermitian matrix and Ỹ is a p × 1 vector that is free of the elements of A, such that Ỹ*AỸ > 0 for all non-null Ỹ, then A = A* as well as the Hermitian form Ỹ*AỸ will be called Hermitian positive definite. The notation A > O indicates that the matrix A is positive definite. Letting A = A′ and Y be real, if Y′AY > 0 for all non-null Y, then A and Y′AY are said to be real positive definite. The trace of a square matrix A will be denoted tr(A). If A and B are p × p constant matrices and if X is a p × p positive definite matrix-variate random variable, then O < A < X < B ⇒ A > O, B > O, X > O, X − A > O and B − X > O.
Many authors have addressed the distributions of the largest, smallest and jth largest eigenvalues of a real central Wishart matrix, which is a special case of a real gamma distributed matrix whose density function is specified in (1). More specifically, a central Wishart random variable eventuates when α = m/2 and B = (1/2)Σ^{−1}, Σ > O, in a matrix-variate gamma distribution with the parameters (α, B), B > O. If Σ = I, the parameters in the central Wishart case are (m/2, (1/2)I) or (m/2, (n/2)I) if the Wishart matrix is obtained as the distribution of the maximum likelihood estimator of a covariance matrix. For convenience, it will be assumed that B = I; however, the procedure is still valid for gamma distributions whose parameters are (α, aI) where a > 0 is a scalar quantity.
Earlier works can be found in several papers authored by [6]. In a series of papers, Khatri addressed the distributions of eigenvalues in the real and complex domains; one may refer for instance to [7,8], where the distributions of different types of matrix-variate random variables and their associated latent roots are determined. Davis dealt with the distributions of eigenvalues by creating and solving systems of differential equations, see for instance [9]. In a series of papers, Krishnaiah and his co-authors dealt with various distributional aspects of eigenvalues; the reader may refer for instance to [10]. Ref. [11] computed upper percentage points of the distribution of the eigenvalues of a Wishart matrix. Ref. [12] discussed the distribution and moments of the smallest eigenvalue of Wishart type matrices. Ref. [13] examined the distribution of the largest eigenvalue in the context of Principal Component Analysis. Ref. [14] derived computable representations of the joint probability density function of consecutively ordered eigenvalues of Wishart matrices. Ref. [15] obtained closed form expressions for the distribution functions of the minimum and maximum eigenvalues of so-called gamma-Wishart random matrices, which arise in the context of multiple-input multiple-output (MIMO) communication transmission. Refs. [16,17] also discussed certain distributional aspects in connection with the eigenvalues of Wishart matrices. Ref. [18] provided an effective recursion scheme to compute the distribution of the largest eigenvalue of the Wishart–Laguerre ensemble. The distribution of the eigenvalues associated with matrix-variate beta distributions has been investigated by [6,19,20,21,22], among others.
The methods employed in these papers yield representations of the distributions of the eigenvalues in terms of Pfaffians of skew symmetric matrices, incomplete gamma functions, multiple integrals, ratios of determinants, solutions of differential equations, functions of matrix arguments and zonal polynomials. None of these methods yield easily tractable representations of the density functions of the eigenvalues.
We now define the matrix-variate gamma and beta distributions in terms of their density functions. A real p × p matrix-variate gamma density function with shape parameter α and scale parameter matrix B > O , denoted by f 1 ( X ) , is given by
f_1(X) dX = [|B|^α / Γ_p(α)] |X|^{α−(p+1)/2} e^{−tr(BX)} dX, B > O, X > O, ℜ(α) > (p−1)/2,  (1)
where ℜ(·) indicates the real part of (·) and Γ_p(α) is a real matrix-variate gamma function defined by the real matrix-variate gamma integral
Γ_p(α) = ∫_{X>O} |X|^{α−(p+1)/2} e^{−tr(X)} dX, ℜ(α) > (p−1)/2,
= π^{p(p−1)/4} Γ(α) Γ(α−1/2) ⋯ Γ(α−(p−1)/2), ℜ(α) > (p−1)/2.
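The product form of Γ_p(α) is straightforward to evaluate numerically. Below is a minimal sketch (the function name `gamma_p` is ours) that also checks the p = 1 reduction to the ordinary gamma function and the closed-form value Γ_2(5/2) = (3/4)π:

```python
import math

def gamma_p(p: int, alpha: float) -> float:
    """Real matrix-variate gamma function:
    Gamma_p(alpha) = pi^{p(p-1)/4} * prod_{k=0}^{p-1} Gamma(alpha - k/2),
    valid for alpha > (p-1)/2."""
    if alpha <= (p - 1) / 2:
        raise ValueError("requires alpha > (p-1)/2")
    prod = math.pi ** (p * (p - 1) / 4)
    for k in range(p):
        prod *= math.gamma(alpha - k / 2)
    return prod

# p = 1 reduces to the ordinary gamma function
assert abs(gamma_p(1, 3.7) - math.gamma(3.7)) < 1e-12
# Gamma_2(5/2) = sqrt(pi) * Gamma(5/2) * Gamma(2) = 3*pi/4
assert abs(gamma_p(2, 2.5) - 0.75 * math.pi) < 1e-12
```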
The function Γ_p(α) is also referred to as the generalized or multivariate gamma function. The corresponding p × p matrix-variate gamma density function in the complex domain, denoted by f̃_1(X̃), is the following:
f̃_1(X̃) dX̃ = [|det(B)|^α / Γ̃_p(α)] |det(X̃)|^{α−p} e^{−tr(BX̃)} dX̃, X̃ = X̃* > O, B > O, ℜ(α) > p−1,  (4a)
where Γ ˜ p ( α ) is the complex matrix-variate gamma function defined by
Γ̃_p(α) = ∫_{X̃>O} |det(X̃)|^{α−p} e^{−tr(X̃)} dX̃, ℜ(α) > p−1,
= π^{p(p−1)/2} Γ(α) Γ(α−1) ⋯ Γ(α−p+1), ℜ(α) > p−1,
with |det(·)| denoting the absolute value or modulus of the determinant of (·). The matrix-variate gamma model considered in this paper is more versatile than one of its special cases, namely the Wishart distribution, which is tied to Gaussian vector random variables. The matrix-variate gamma distribution might also prove relevant in connection with robustness studies against normality.
The real and complex p × p matrix-variate type-1 beta and type-2 beta density functions are the following, with the conditions ℜ(α) > (p−1)/2, ℜ(β) > (p−1)/2 in the real case and ℜ(α) > p−1, ℜ(β) > p−1 in the complex case:
f_2(X) dX = [Γ_p(α+β) / (Γ_p(α) Γ_p(β))] |X|^{α−(p+1)/2} |I − X|^{β−(p+1)/2} dX, O < X < I,  (5)
f̃_2(X̃) dX̃ = [Γ̃_p(α+β) / (Γ̃_p(α) Γ̃_p(β))] |det(X̃)|^{α−p} |det(I − X̃)|^{β−p} dX̃, O < X̃ < I,  (5a)
f_3(X) dX = [Γ_p(α+β) / (Γ_p(α) Γ_p(β))] |X|^{α−(p+1)/2} |I + X|^{−(α+β)} dX, X > O,  (6)
and
f̃_3(X̃) dX̃ = [Γ̃_p(α+β) / (Γ̃_p(α) Γ̃_p(β))] |det(X̃)|^{α−p} |det(I + X̃)|^{−(α+β)} dX̃, X̃ > O.  (6a)
The novel approach that will be utilized for securing useful representations of the joint density functions of the eigenvalues of certain matrix-variate random variables relies initially on the Jacobians of matrix transformation that are specified in the combined lemmas that follow. For their derivations as well as related results, the reader is referred to [23].
Lemma 1.
Let S and S ˜ be real positive definite and Hermitian positive definite p × p matrices, respectively. Let the eigenvalues of S be distinct and let them be denoted by λ 1 > λ 2 > > λ p > 0 . Since S ˜ is Hermitian positive definite, its eigenvalues are, as well, real and positive. Assuming that they are distinct, let us also denote them by λ 1 > λ 2 > > λ p > 0 . Then, on applying the full orthonormal (in the real domain) or unitary (in the complex domain) transformation and integrating out the differential elements corresponding to the orthonormal or unitary matrix over the full orthogonal or unitary group, we have
dS = [π^{p^2/2} / Γ_p(p/2)] Π_{i<j} (λ_i − λ_j) dD, with dD ≡ dλ_1 ∧ ⋯ ∧ dλ_p,  (7)
and
dS̃ = [π^{p(p−1)} / Γ̃_p(p)] Π_{i<j} (λ_i − λ_j)^2 dD, with dD ≡ dλ_1 ∧ ⋯ ∧ dλ_p.  (7a)
This paper is organized as follows. The joint density functions of the eigenvalues of the distributions of matrix-variate gamma and beta of both types are obtained in Section 2 where convenient representations of the products appearing in (7) and (7a) are provided. Representations of the density functions of the smallest, largest and j th largest eigenvalues of matrix-variate gamma and types 1 and 2 beta random variables are derived in Section 3, Section 4 and Section 5, respectively. It will be established that any such density function can be expressed as a finite sum when the product of the eigenvalues appearing in the corresponding joint density function is raised to an integer power; otherwise, in the so-called general case, the density function will be shown to be expressible as a series. In every instance, both the real and complex cases are treated. Additionally, each one of the last three sections features an illustrative numerical example.

2. Joint Density Functions of the Eigenvalues of Matrix-Variate Gamma and Type-1 and Type-2 Beta Random Variables in the Real and Complex Domains

The density functions of the matrix-variate gamma, type-1 beta and type-2 beta random variables in the real and complex domains are respectively given in (1), (4a), (5), (5a), (6) and (6a). On applying Lemma 1 and then expressing the determinants as products of eigenvalues and the traces as sums of eigenvalues, one obtains the following joint density functions of the eigenvalues associated with the matrix-variate gamma distribution with the parameters (α, I), where I denotes the identity matrix, and the matrix-variate type-1 and type-2 beta distributions with the parameters (α, β), all the eigenvalues being denoted by λ_1 > λ_2 > ⋯ > λ_p > 0, where λ_1 is less than one in the case of a matrix-variate type-1 beta random variable:
f_1(D) dD = ζ_1 [Π_{j=1}^{p} λ_j^{α−(p+1)/2}] e^{−(λ_1+⋯+λ_p)} Π_{i<j} (λ_i − λ_j) dD,  (8)
f̃_1(D) dD = ζ̃_1 [Π_{j=1}^{p} λ_j^{α−p}] e^{−(λ_1+⋯+λ_p)} Π_{i<j} (λ_i − λ_j)^2 dD,  (8a)
f_2(D) dD = ζ_2 [Π_{j=1}^{p} λ_j^{α−(p+1)/2}] [Π_{j=1}^{p} (1 − λ_j)^{β−(p+1)/2}] Π_{i<j} (λ_i − λ_j) dD,  (9)
f̃_2(D) dD = ζ̃_2 [Π_{j=1}^{p} λ_j^{α−p}] [Π_{j=1}^{p} (1 − λ_j)^{β−p}] Π_{i<j} (λ_i − λ_j)^2 dD,  (9a)
f_3(D) dD = ζ_3 [Π_{j=1}^{p} λ_j^{α−(p+1)/2}] [Π_{j=1}^{p} (1 + λ_j)^{−(α+β)}] Π_{i<j} (λ_i − λ_j) dD,  (10)
f̃_3(D) dD = ζ̃_3 [Π_{j=1}^{p} λ_j^{α−p}] [Π_{j=1}^{p} (1 + λ_j)^{−(α+β)}] Π_{i<j} (λ_i − λ_j)^2 dD,  (10a)
where
ζ_1 = [π^{p^2/2} / Γ_p(p/2)] [1/Γ_p(α)], ℜ(α) > (p−1)/2;  ζ̃_1 = [π^{p(p−1)} / Γ̃_p(p)] [1/Γ̃_p(α)], ℜ(α) > p−1;
ζ_2 = ζ_3 = [π^{p^2/2} / Γ_p(p/2)] [Γ_p(α+β) / (Γ_p(α) Γ_p(β))], ℜ(α) > (p−1)/2, ℜ(β) > (p−1)/2;  ζ̃_2 = ζ̃_3 = [π^{p(p−1)} / Γ̃_p(p)] [Γ̃_p(α+β) / (Γ̃_p(α) Γ̃_p(β))], ℜ(α) > p−1, ℜ(β) > p−1.

2.1. Simplification of the Factor Π_{i<j} (λ_i − λ_j)

When attempting to derive the marginal density function of λ_j for any fixed j, one has to expand the factor Π_{i<j} (λ_i − λ_j) appearing in the joint density functions specified in (8), (9) and (10). Useful representations of this factor are provided in this section.
It is well known that, in the real domain, one can write Π_{i<j} (λ_i − λ_j) as a Vandermonde determinant which, incidentally, has been utilized in connection with nonlinear transformations in [23]. That is,
Π_{i<j} (λ_i − λ_j) = |λ_1^{p−1} λ_1^{p−2} ⋯ λ_1 1; λ_2^{p−1} λ_2^{p−2} ⋯ λ_2 1; ⋯; λ_p^{p−1} λ_p^{p−2} ⋯ λ_p 1| ≡ |A|,  (11)
where the (i, j)-th element of A, denoted by a_{ij}, is equal to λ_i^{p−j} for all i and j. We could consider a cofactor expansion of |A|, consisting of expanding it in terms of the elements of A and their cofactors along any row. In this case, it would be advantageous to do so along the i-th row as the cofactors would then be free of λ_i and the coefficients of the cofactors would only be powers of λ_i. However, we would then lose the symmetry with respect to the elements of the cofactors in this instance. Since symmetry is required for the procedure to be successful, we will consider the general expansion of a determinant, that is,
|A| = Σ_K (−1)^{ρ_K} a_{1k_1} a_{2k_2} ⋯ a_{pk_p} = Σ_K (−1)^{ρ_K} λ_1^{p−k_1} λ_2^{p−k_2} ⋯ λ_p^{p−k_p},  (12)
where K = (k_1, …, k_p) represents a given permutation of the integers (1, 2, …, p) and Σ_K denotes the sum over all p! such permutations. Defining ρ_K as the number of transpositions needed to bring (k_1, …, k_p) into the natural order (1, 2, …, p), (−1)^{ρ_K} will correspond to a minus sign for the corresponding term if ρ_K is odd, the sign being otherwise positive. One interchange of two symbols corresponds to one transposition. For example, for p = 3, the possible permutations are (1,2,3), (1,3,2), (2,3,1), (2,1,3), (3,1,2), (3,2,1), so that there are 3! = 6 terms. For the sequence (1,2,3), k_1 = 1, k_2 = 2 and k_3 = 3; for the sequence (1,3,2), k_1 = 1, k_2 = 3 and k_3 = 2, and so on. For (1,2,3), ρ_K = 0, corresponding to a plus sign; for (1,3,2), ρ_K = 1, corresponding to a minus sign, and so on. In the complex case,
Π_{i<j} (λ_i − λ_j)^2 = |A|^2 = |AA′| = |A′A|
where
A′ = (λ_1^{p−1} λ_2^{p−1} ⋯ λ_p^{p−1}; λ_1^{p−2} λ_2^{p−2} ⋯ λ_p^{p−2}; ⋯; 1 1 ⋯ 1) ≡ [β_1, β_2, …, β_p], β_j = (λ_j^{p−1}, λ_j^{p−2}, …, 1)′.
Observe that β_j only contains λ_j and that A′A = β_1β_1′ + ⋯ + β_pβ_p′, so that, for instance, the (i, j)th element of β_1β_1′ is λ_1^{2p−(i+j)}. Accordingly, the (i, j)th element of β_1β_1′ + ⋯ + β_pβ_p′ is Σ_{r=1}^{p} λ_r^{2p−(i+j)}. Thus, letting B = A′A = (b_{ij}), b_{ij} = Σ_{r=1}^{p} λ_r^{2p−(i+j)}. Now, consider the expansion (12) of |B|, that is,
Π_{i<j} (λ_i − λ_j)^2 = |B| = |A′A| = Σ_K (−1)^{ρ_K} b_{1k_1} b_{2k_2} ⋯ b_{pk_p}
where K = ( k 1 , , k p ) , ( k 1 , , k p ) being a permutation of the sequence ( 1 , 2 , , p ) . Note that
b_{1k_1} = λ_1^{2p−(1+k_1)} + λ_2^{2p−(1+k_1)} + ⋯ + λ_p^{2p−(1+k_1)},
b_{2k_2} = λ_1^{2p−(2+k_2)} + λ_2^{2p−(2+k_2)} + ⋯ + λ_p^{2p−(2+k_2)},
⋮
b_{pk_p} = λ_1^{2p−(p+k_p)} + λ_2^{2p−(p+k_p)} + ⋯ + λ_p^{2p−(p+k_p)}.
Then, on writing
b_{1k_1} b_{2k_2} ⋯ b_{pk_p} = Σ_{r_1,…,r_p} λ_1^{r_1} ⋯ λ_p^{r_p},
we have
Π_{i<j} (λ_i − λ_j)^2 = |B| = Σ_K (−1)^{ρ_K} Σ_{r_1,…,r_p} λ_1^{r_1} ⋯ λ_p^{r_p},  (12a)
where the r_j's, j = 1, …, p, are nonnegative integers. We may now express the joint density function of the eigenvalues in a systematic way, whether in the real or complex domain.
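Both expansions can be sanity-checked numerically for a small p. The sketch below (helper names are ours) compares the signed permutation sum (12) with the direct product Π_{i<j}(λ_i − λ_j), and the determinant of B = (b_{ij}), b_{ij} = Σ_r λ_r^{2p−(i+j)}, with Π_{i<j}(λ_i − λ_j)^2 as in (12a):

```python
from itertools import permutations

def perm_sign(k):
    """(-1)^{rho_K}: parity of a permutation k of (1, ..., p)."""
    k, sign = list(k), 1
    for i in range(len(k)):
        while k[i] != i + 1:          # move the value i+1 into slot i
            j = k[i] - 1
            k[i], k[j] = k[j], k[i]   # one transposition
            sign = -sign
    return sign

def expansion_12(lam):
    """Sum over K of (-1)^{rho_K} * prod_i lam_i^{p - k_i}, as in (12)."""
    p = len(lam)
    total = 0
    for K in permutations(range(1, p + 1)):
        term = perm_sign(K)
        for i, ki in enumerate(K):
            term *= lam[i] ** (p - ki)
        total += term
    return total

def det(M):
    """Determinant by cofactor expansion (exact for integer entries)."""
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det([r[:j] + r[j + 1:] for r in M[1:]])
               for j in range(len(M)))

def b_matrix(lam):
    """B = A'A with b_ij = sum_r lam_r^{2p - (i + j)}, i, j = 1..p."""
    p = len(lam)
    return [[sum(l ** (2 * p - (i + j)) for l in lam)
             for j in range(1, p + 1)] for i in range(1, p + 1)]

lam = [5, 3, 2]   # lambda_1 > lambda_2 > lambda_3 > 0
prod = (lam[0] - lam[1]) * (lam[0] - lam[2]) * (lam[1] - lam[2])
assert expansion_12(lam) == prod        # (12): signed sum equals the product (= 6)
assert det(b_matrix(lam)) == prod ** 2  # (12a): |B| equals the squared product (= 36)
```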

3. The Distribution of the Eigenvalues of a Matrix-Variate Gamma Random Variable

Convenient representations of the joint density functions of λ_1, …, λ_p appearing in (8) and (8a) can be obtained via (12) and (12a), respectively. In the real case, let the p × p real positive definite matrix S be matrix-variate gamma distributed with the parameters (α, I). Denoting by g_1(D) the joint density function of the λ_j's associated with S, we have the following density function, with dD = dλ_1 ∧ ⋯ ∧ dλ_p:
g_1(D) dD = [π^{p^2/2} / (Γ_p(p/2) Γ_p(α))] [Π_{j=1}^{p} λ_j^{α−(p+1)/2}] e^{−(λ_1+⋯+λ_p)} Σ_K (−1)^{ρ_K} λ_1^{p−k_1} ⋯ λ_p^{p−k_p} dD
= [π^{p^2/2} / (Γ_p(p/2) Γ_p(α))] Σ_K (−1)^{ρ_K} (λ_1^{m_1} ⋯ λ_p^{m_p}) e^{−(λ_1+⋯+λ_p)} dD,  (13)
where
m_j = α − (p+1)/2 + p − k_j = α + (p−1)/2 − k_j.
In the complex case, the corresponding joint density function is
g̃_1(D) dD = [π^{p(p−1)} / (Γ̃_p(p) Γ̃_p(α))] Σ_K (−1)^{ρ_K} Σ_{r_1,…,r_p} (λ_1^{α−p+r_1} ⋯ λ_p^{α−p+r_p}) e^{−(λ_1+⋯+λ_p)} dD
= [π^{p(p−1)} / (Γ̃_p(p) Γ̃_p(α))] Σ_K (−1)^{ρ_K} Σ_{r_1,…,r_p} (λ_1^{m_1} ⋯ λ_p^{m_p}) e^{−(λ_1+⋯+λ_p)} dD,  (13a)
with m_j = α − p + r_j, the r_j's being as defined in (12a). For convenience, we will use the same symbol m_j for both the real and complex cases; however, in the real case, m_j = α + (p−1)/2 − k_j, and in the complex case, m_j = α − p + r_j.
In the next subsections, we will provide explicit representations of the exact density function of the jth largest eigenvalue of a gamma distributed matrix as finite sums when the m_j's as defined in (12) and (12a) are positive integers, and in terms of infinite series in the general non-integer case, whether the domains are real or complex. These include the distributions of the largest and smallest eigenvalues as well as the joint distributions of successive eigenvalues under the general real and complex matrix-variate gamma distributions which, incidentally, also comprise the real and complex central Wishart distributions.

3.1. Density Function of the Smallest Eigenvalue of a Matrix-Variate Gamma Random Variable

We will initially examine the situation where m_j = α − (p+1)/2 + p − k_j is a positive integer in the case of a real matrix-variate gamma density function. Clearly, either all the m_j's, j = 1, …, p, are positive integers or none of them are. We will integrate out λ_1, …, λ_{p−1} from the joint density function specified in (13) to obtain the marginal density function of λ_p. Since m_j is a positive integer, we can integrate by parts. For instance,
∫_{λ_1=λ_2}^{∞} λ_1^{m_1} e^{−λ_1} dλ_1 = [−λ_1^{m_1} e^{−λ_1}]_{λ_2}^{∞} + [−m_1 λ_1^{m_1−1} e^{−λ_1}]_{λ_2}^{∞} + ⋯ + [−m_1! e^{−λ_1}]_{λ_2}^{∞} = Σ_{μ_1=0}^{m_1} [m_1!/(m_1−μ_1)!] λ_2^{m_1−μ_1} e^{−λ_2},
and integrating with respect to λ_1, …, λ_{j−1}, that is, evaluating
∫_{λ_{j−1}=λ_j}^{∞} ⋯ ∫_{λ_2=λ_3}^{∞} ∫_{λ_1=λ_2}^{∞} λ_1^{m_1} ⋯ λ_{j−1}^{m_{j−1}} e^{−(λ_1+⋯+λ_{j−1})} dλ_1 ⋯ dλ_{j−1},
yields
Σ_{μ_1=0}^{m_1} [m_1!/(m_1−μ_1)!] Σ_{μ_2=0}^{m_1−μ_1+m_2} [(m_1−μ_1+m_2)!/(2^{μ_2+1} (m_1−μ_1+m_2−μ_2)!)] × ⋯ × Σ_{μ_{j−1}=0}^{m_1−μ_1+⋯+m_{j−1}} [(m_1−μ_1+⋯+m_{j−1})!/((j−1)^{μ_{j−1}+1} (m_1−μ_1+⋯+m_{j−1}−μ_{j−1})!)] λ_j^{m_1−μ_1+⋯+m_{j−1}−μ_{j−1}} e^{−(j−1)λ_j} ≡ ϕ_{j−1}(λ_j), j = 3, …, p.  (14)
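The first integration step rests on the identity ∫_{λ_2}^{∞} λ^{m} e^{−λ} dλ = Σ_{μ=0}^{m} [m!/(m−μ)!] λ_2^{m−μ} e^{−λ_2} for a positive integer m. A quick numerical check with a basic composite Simpson rule (our own utility; illustrative values m = 3, λ_2 = 1.5):

```python
import math

def simpson(f, a, b, n=20000):
    """Composite Simpson rule on [a, b] with n (even) subintervals."""
    h = (b - a) / n
    s = f(a) + f(b)
    for k in range(1, n):
        s += (4 if k % 2 else 2) * f(a + k * h)
    return s * h / 3

def upper_tail_sum(m, a):
    """sum_{mu=0}^{m} m!/(m-mu)! * a^{m-mu} * e^{-a}  (the closed form)."""
    return sum(math.factorial(m) / math.factorial(m - mu) * a ** (m - mu)
               for mu in range(m + 1)) * math.exp(-a)

m, a = 3, 1.5
# truncate the upper limit at a + 60: the neglected tail is negligible
numeric = simpson(lambda t: t ** m * math.exp(-t), a, a + 60.0)
assert abs(numeric - upper_tail_sum(m, a)) < 1e-8
```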
Note that the complex counterpart of (14) will be identical since the same notation is retained in the complex case. Then, we have the following result:
Theorem 1.
When the m_j's as defined in (13), that is, m_j = α − (p+1)/2 + p − k_j, j = 1, …, p, are positive integers, the density function of λ_p, the smallest eigenvalue of a real p × p gamma distributed matrix with parameters (α, I), is
g_{1p}(λ_p) dλ_p = c_K ϕ_{p−1}(λ_p) λ_p^{m_p} e^{−λ_p} dλ_p
= c_K Σ_{μ_1=0}^{m_1} [m_1!/(m_1−μ_1)!] Σ_{μ_2=0}^{m_1−μ_1+m_2} [(m_1−μ_1+m_2)!/(2^{μ_2+1} (m_1−μ_1+m_2−μ_2)!)] ⋯ Σ_{μ_{p−1}=0}^{m_1−μ_1+⋯+m_{p−1}} [(m_1−μ_1+⋯+m_{p−1})!/((p−1)^{μ_{p−1}+1} (m_1−μ_1+⋯+m_{p−1}−μ_{p−1})!)] × λ_p^{m_1−μ_1+⋯+m_{p−1}−μ_{p−1}+m_p} e^{−pλ_p} dλ_p  (15)
for 0 < λ p < , where
c_K = [π^{p^2/2} / (Γ_p(p/2) Γ_p(α))] Σ_K (−1)^{ρ_K},
ϕ_{p−1}(λ_p) is as specified in (14) and K = (k_1, …, k_p) is as defined in (12).
In the case of a complex p × p matrix-variate gamma distribution, the expression for ϕ_{j−1}(λ_j) given in (14) remains the same except that m_j is then equal to α − p + r_j with r_j being as defined in (12a). The resulting density function of λ_p in the complex domain, which is denoted by g̃_{1p}(λ_p), is specified in the next result.
Theorem 2.
When m_j = α − p + r_j is a positive integer for j = 1, …, p, where r_j is as defined in (12a), the density function of λ_p, the smallest eigenvalue of a complex p × p matrix-variate gamma random variable, is given by
g̃_{1p}(λ_p) dλ_p = c̃_K ϕ_{p−1}(λ_p) λ_p^{m_p} e^{−λ_p} dλ_p
= c̃_K Σ_{μ_1=0}^{m_1} [m_1!/(m_1−μ_1)!] Σ_{μ_2=0}^{m_1−μ_1+m_2} [(m_1−μ_1+m_2)!/(2^{μ_2+1} (m_1−μ_1+m_2−μ_2)!)] ⋯ Σ_{μ_{p−1}=0}^{m_1−μ_1+⋯+m_{p−1}} [(m_1−μ_1+⋯+m_{p−1})!/((p−1)^{μ_{p−1}+1} (m_1−μ_1+⋯+m_{p−1}−μ_{p−1})!)] × λ_p^{m_1−μ_1+⋯+m_{p−1}−μ_{p−1}+m_p} e^{−pλ_p} dλ_p
for 0 < λ p < , where
c̃_K = [π^{p(p−1)} / (Γ̃_p(p) Γ̃_p(α))] Σ_K (−1)^{ρ_K} Σ_{r_1,…,r_p}.
Remark 1.
One can obtain the joint density function of all the remaining smallest eigenvalues from the jth step of integration, both in the real and complex cases. If the scale parameter matrix of the gamma distribution is of the form aI where a > 0 is a real scalar and I is the identity matrix, then the distributions of the eigenvalues can also be obtained from the proposed procedure since, for any square matrix A, the eigenvalues of aA are aλ_j where the λ_j's, j = 1, …, p, are the eigenvalues of A. Thus, a given eigenvalue λ_j associated with a p × p real gamma distributed matrix with parameters (α, aI) will be distributed as a times the corresponding eigenvalue λ_j of the p × p real gamma distributed matrix with parameters (α, I). As an example, consider a p × p Wishart distributed matrix obtained from a sample of size n generated from a p-variate Gaussian population whose covariance matrix is the identity matrix. If the p × p real Wishart matrix is obtained from the product of an observation matrix and its transpose, then a = 1/2 and α = n/2. In the case of the distribution of the maximum likelihood estimator of the covariance matrix as obtained from a sample of size n, the λ_j's are multiplied by the constant a = n/2 and α will be equal to (n−1)/2. In the complex domain, α will be an integer so that α − p will always be an integer and Theorem 2 will yield the required distributions. In the real domain, α − (p+1)/2 will either be an integer, this case having been covered in Theorem 1, or an integer plus one half, in which instance the general case discussed in Section 3.3 applies.
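The scaling fact invoked in this remark, namely that the eigenvalues of aA are a times those of A, is easily illustrated. A minimal sketch (the helper name is ours) using the closed-form eigenvalues of a symmetric 2 × 2 matrix:

```python
import math

def sym2_eigs(a, b, c):
    """Eigenvalues of the symmetric 2x2 matrix [[a, b], [b, c]]:
    ((a + c) +/- sqrt((a - c)^2 + 4 b^2)) / 2, largest first."""
    d = math.hypot(a - c, 2 * b)
    return ((a + c + d) / 2, (a + c - d) / 2)

s = 2.5                                  # the scalar a > 0 of the remark
e1, e2 = sym2_eigs(3.0, 1.0, 2.0)        # eigenvalues of A
f1, f2 = sym2_eigs(s * 3.0, s * 1.0, s * 2.0)  # eigenvalues of s*A
assert abs(f1 - s * e1) < 1e-12 and abs(f2 - s * e2) < 1e-12
```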

3.2. Density Function of the Largest Eigenvalue of a Matrix-Variate Gamma Random Variable

Consider the case of m_j = α − (p+1)/2 + p − k_j being an integer. One has to integrate out λ_p, …, λ_2 in order to obtain the marginal density function of λ_1. As the initial step, consider the integration with respect to λ_p, that is,
∫_{λ_p=0}^{λ_{p−1}} λ_p^{m_p} e^{−λ_p} dλ_p = [−λ_p^{m_p} e^{−λ_p}]_0^{λ_{p−1}} + ⋯ + [−m_p! e^{−λ_p}]_0^{λ_{p−1}} = m_p! − Σ_{ν_p=0}^{m_p} [m_p!/(m_p−ν_p)!] λ_{p−1}^{m_p−ν_p} e^{−λ_{p−1}}.
Now, multiplying each term by λ_{p−1}^{m_{p−1}} e^{−λ_{p−1}} and integrating by parts, one obtains the second step integral:
m_p! ∫_{λ_{p−1}=0}^{λ_{p−2}} λ_{p−1}^{m_{p−1}} e^{−λ_{p−1}} dλ_{p−1} − Σ_{ν_p=0}^{m_p} [m_p!/(m_p−ν_p)!] ∫_{λ_{p−1}=0}^{λ_{p−2}} λ_{p−1}^{m_p−ν_p+m_{p−1}} e^{−2λ_{p−1}} dλ_{p−1}
= m_p! m_{p−1}! − m_p! Σ_{ν_{p−1}=0}^{m_{p−1}} [m_{p−1}!/(m_{p−1}−ν_{p−1})!] λ_{p−2}^{m_{p−1}−ν_{p−1}} e^{−λ_{p−2}} − Σ_{ν_p=0}^{m_p} [m_p!/(m_p−ν_p)!] [(m_p−ν_p+m_{p−1})!/2^{m_p−ν_p+m_{p−1}+1}] + Σ_{ν_p=0}^{m_p} [m_p!/(m_p−ν_p)!] Σ_{ν_{p−1}=0}^{m_p−ν_p+m_{p−1}} [(m_p−ν_p+m_{p−1})!/(2^{ν_{p−1}+1} (m_p−ν_p+m_{p−1}−ν_{p−1})!)] λ_{p−2}^{m_p−ν_p+m_{p−1}−ν_{p−1}} e^{−2λ_{p−2}}.
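The closed form produced by this first step, ∫_0^{b} λ^{m} e^{−λ} dλ = m! − Σ_{ν=0}^{m} [m!/(m−ν)!] b^{m−ν} e^{−b}, can be verified numerically. A sketch with a basic composite Simpson rule (our own utility; illustrative values m = 4, b = 2):

```python
import math

def simpson(f, a, b, n=20000):
    """Composite Simpson rule on [a, b] with n (even) subintervals."""
    h = (b - a) / n
    s = f(a) + f(b)
    for k in range(1, n):
        s += (4 if k % 2 else 2) * f(a + k * h)
    return s * h / 3

def lower_range_sum(m, b):
    """m! - sum_{nu=0}^{m} m!/(m-nu)! * b^{m-nu} * e^{-b}  (the closed form)."""
    tail = sum(math.factorial(m) / math.factorial(m - nu) * b ** (m - nu)
               for nu in range(m + 1)) * math.exp(-b)
    return math.factorial(m) - tail

m, b = 4, 2.0
numeric = simpson(lambda t: t ** m * math.exp(-t), 0.0, b)
assert abs(numeric - lower_range_sum(m, b)) < 1e-8
```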
Proceeding successively in a similar manner, one will reach the (p−1)th integration step, which will produce a finite linear combination of terms of the types obtained at the first and second steps. Then, on multiplying this expression by the normalizing constant and λ_1^{m_1} e^{−λ_1}, one will obtain the portion of the density corresponding to a given permutation of the k_j's. Finally, on adding up the expressions resulting from the p! permutations while taking their respective signs into account, one will obtain the representation of the density function of λ_1 that is provided in the next theorem. It should be noted that these steps can readily be implemented for computational purposes.
Theorem 3.
Assume that m_j = α − (p+1)/2 + p − k_j is a positive integer for j = 1, …, p, where k_j is as defined in (12). On successively integrating out λ_p, …, λ_2 from the joint density function of λ_1, …, λ_p given in (13) while taking into account all the permutations of the k_j's as defined in (12), one will obtain the following representation of the density function of λ_1, the largest eigenvalue of a real p × p matrix-variate gamma random variable:
g_11(λ_1) dλ_1 = [π^{p^2/2} / (Γ_p(α) Γ_p(p/2))] Σ c λ_1^{γ−1} e^{−κλ_1} dλ_1, 0 < λ_1 < ∞,  (16)
where the values of the c ’s, γ ’s and κ ’s result from applying the proposed sequential procedure.
One then has the following results as well:
  • The constant preceding the summation sign in (16), that is, the normalizing constant of the joint density function of λ_1, …, λ_p, can be obtained by integrating the sum term by term from 0 to ∞ and taking the inverse of the result, that is,
    [Σ c Γ(γ) κ^{−γ}]^{−1} = π^{p^2/2} / (Γ_p(α) Γ_p(p/2)) = ζ_1.
  • The cumulative distribution function will be
    F_11(ξ) = ζ_1 Σ c κ^{−γ} G(γ, κξ),
    where G ( · , · ) denotes the lower incomplete gamma function with shape parameter γ .
  • The moment of order h will be
    m_11(h) = ζ_1 Σ c κ^{−(γ+h)} Γ(γ + h),
    where h must be such that the sum of h and the minimum of the γ's remains positive. On the basis of those moments, an approximate distribution could, for instance, be obtained by making use of Pearson curves or the moment-based methodologies introduced in [24,25], which happen to yield very accurate percentiles.
  • The characteristic function will be
    C_11(t) = ζ_1 Σ c Γ(γ) κ^{−γ} (1 − it/κ)^{−γ}, i = √(−1).
Example 1.
Let p = 4 and α = 9/2, so that the joint density function of the eigenvalues is ζ_1 e^{−λ_1−λ_2−λ_3−λ_4} λ_1^2 λ_2^2 λ_3^2 λ_4^2 |A|, with A being the order four Vandermonde matrix defined in (11) and ζ_1 = 64/4725. On sequentially integrating out λ_4, λ_3 and λ_2 and taking into account the permutations of the k_j's, the following representation of the density function of λ_1 is obtained:
g 1 f ( λ 1 ) = ζ 1 ( 3 2 e 3 λ 1 λ 1 9 3 4 e 4 λ 1 λ 1 8 9 e 3 λ 1 λ 1 8 + 15 4 e 2 λ 1 λ 1 8 27 2 e 4 λ 1 λ 1 7 63 2 e 3 λ 1 λ 1 7 45 2 e 2 λ 1 λ 1 7 405 4 e 4 λ 1 λ 1 6 135 2 e 3 λ 1 λ 1 6 + 225 4 e 2 λ 1 λ 1 6 1575 4 e 4 λ 1 λ 1 5 225 4 e 3 λ 1 λ 1 5 135 4 e 2 λ 1 λ 1 5 + 45 4 e λ 1 λ 1 5 3375 4 e 4 λ 1 λ 1 4 + 135 e 3 λ 1 λ 1 4 405 4 e 2 λ 1 λ 1 4 135 e λ 1 λ 1 4 945 e 4 λ 1 λ 1 3 + 945 2 e 3 λ 1 λ 1 3 + 945 2 e λ 1 λ 1 3 945 2 e 4 λ 1 λ 1 2 + 945 2 e 3 λ 1 λ 1 2 + 945 2 e 2 λ 1 λ 1 2 945 2 e λ 1 λ 1 2 )
wherefrom the cumulative distribution function, moments and characteristic function of the largest eigenvalue can be determined. It was verified that the expression in parentheses integrates exactly to 4725/64 = Γ_4(9/2) Γ_4(2) / π^8, and determined that, in this case, λ_1, whose density function is plotted in Figure 1, has a mean value of 9.001 and a standard deviation of 6.357.
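The mechanics of Example 1 can be reproduced in miniature. As an illustration of ours (not taken from the paper), take p = 2 and α = 5/2, for which ζ_1 = 4/3 and carrying out the single integration ∫_0^{λ_1} λ_2 (λ_1 − λ_2) e^{−λ_2} dλ_2 yields g_11(λ_1) = (4/3) λ_1 e^{−λ_1} [(λ_1 − 2) + (λ_1 + 2) e^{−λ_1}]. The sketch below verifies the normalization and computes the mean exactly, using ∫_0^∞ λ^n e^{−κλ} dλ = n!/κ^{n+1}:

```python
from fractions import Fraction
from math import factorial

# g_11(x) = (4/3) * [ x^2 e^{-x} - 2 x e^{-x} + x^2 e^{-2x} + 2 x e^{-2x} ],
# stored as (coefficient c, power n, rate kappa) for terms c * x^n * e^{-kappa x}
terms = [(Fraction(1), 2, 1), (Fraction(-2), 1, 1),
         (Fraction(1), 2, 2), (Fraction(2), 1, 2)]
zeta1 = Fraction(4, 3)   # normalizing constant for p = 2, alpha = 5/2

def moment(h):
    """E[lambda_1^h] via exact term-by-term integration: n!/kappa^{n+1}."""
    return zeta1 * sum(c * Fraction(factorial(n + h), k ** (n + h + 1))
                       for (c, n, k) in terms)

assert moment(0) == 1                 # the density integrates to one
assert moment(1) == Fraction(23, 6)   # mean of the largest eigenvalue
```

The same bookkeeping (exact rational coefficients, powers and rates) is what makes the p = 4 computation of Example 1 routine to implement.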
Remark 2.
It should be noted that in every case considered in this paper, each successive integration step will actually yield a linear combination of terms of the types obtained in the respective first step of integration, and that, by following the procedure leading to (16), one will end up with a linear combination of univariate counterparts (or equivalently their functional forms) of the matrix-variate density function under consideration. For example, the density function (15) can manifestly be expressed as a linear combination of gamma density functions of the form (16) by substituting λ_p for λ_1 in (16). Thus, representations of their cumulative distribution functions, moments and characteristic functions are available as well. All the calculations were carried out with the symbolic computation software Mathematica, the code being available from the second author upon request.
A procedure paralleling that leading to the density function specified in Theorem 3 can be utilized in the complex domain. In this instance, m_j will be equal to α − p + r_j with r_j being as defined in (12a). Now, referring to c̃_K as defined in Theorem 2, one will have to take into account the p! permutations of the k_j's with their associated signs, as well as the p^2 summands in the sum over the r_i's. Incidentally, these steps can readily be coded for computational work. Assuming that the m_j's are positive integers and letting the density function in the complex domain be denoted by g̃_11(λ_1), one has the following result:
Theorem 4.
Let m j = α p + r j be a positive integer for j = 1 , , p , with the r j ’s being as defined in (12a). Then, on successively integrating λ p , , λ 2 from the joint density function of λ 1 , , λ p , which is given in (13a), while taking into account all the combinations of the r j ’s, the density function of λ 1 will end up having the following form in the case of a complex p × p matrix-variate gamma distribution:
g ˜ 11 ( λ 1 ) d λ 1 = π p ( p 1 ) Γ ˜ p ( p ) Γ ˜ p ( α ) c λ 1 γ 1 e κ λ 1 d λ 1 , 0 < λ 1 < , ( 3.5 )
where the c ’s are the coefficients of the linear combination that result from applying the proposed sequential procedure.
Distributional results that are analogous to those following Theorem 3 also apply in the complex domain as g 11 ( λ 1 ) and g ˜ 11 ( λ 1 ) share the same structure.
Remark 3.
One can also compute the density function of the j th largest eigenvalue λ j from Theorems 1 and 3. In order to determine the density function of λ j , one has to integrate out λ 1 , , λ j 1 and λ p , λ p 1 , , λ j + 1 , the resulting expressions being available from the ( j 1 ) th step when integrating λ 1 , , λ j 1 and the ( p j ) th step when sequentially integrating λ p , λ p 1 , , λ j + 1 . As well, the joint density function of any set of consecutive λ j ’s can be similarly derived. One may also determine the density function of a specific λ j or the joint density function of any set of successive λ j ’s in the complex domain by applying the procedures outlined in connection with Theorems 2 and 4.

3.3. Density Function of the Largest Eigenvalue of a Matrix-Variate Gamma Random Variable in the General Case

By general case, it is meant that m j = α p + 1 2 + p k j is not a positive integer. In the case of a real Wishart distribution, m j can be a half-integer; however, in the general case of a matrix-variate gamma distribution, α can be any real number greater than p 1 2 . To address this general case, we will expand the exponential part and then integrate term by term. Thus, the first step integral is
λ p = 0 λ p 1 λ p m p e λ p d λ p = ν p = 0 ( 1 ) ν p ν p ! λ p = 0 λ p 1 λ p m p + ν p d λ p = ν p = 0 ( 1 ) ν p ν p ! 1 ( m p + ν p + 1 ) λ p 1 m p + ν p + 1 .
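This first-step expansion can be checked numerically by comparing the truncated series with a direct quadrature of the integrand t m e − t over ( 0 , λ p − 1 ) . The sketch below uses an illustrative non-integer exponent; the truncation level and the quadrature rule are choices of the sketch, not part of the paper’s procedure:

```python
import math

def simpson(f, a, b, n=2000):
    # Composite Simpson's rule (n must be even)
    dx = (b - a) / n
    s = f(a) + f(b) + sum((4 if i % 2 else 2) * f(a + i * dx) for i in range(1, n))
    return s * dx / 3

def first_step_series(m, x, terms=80):
    # Term-by-term integral of t^m e^(-t) over (0, x):
    # sum_{nu >= 0} (-1)^nu / nu! * x^(m + nu + 1) / (m + nu + 1)
    return sum((-1) ** nu / math.factorial(nu) * x ** (m + nu + 1) / (m + nu + 1)
               for nu in range(terms))

m, x = 2.5, 1.3  # non-integer exponent, as in the general case
quad = simpson(lambda t: t ** m * math.exp(-t), 0.0, x)
print(abs(first_step_series(m, x) - quad) < 1e-6)  # True
```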
Continuing this process will result in the following j th step integral:
ν p = 0 ( 1 ) ν p ν p ! 1 m p + ν p + 1 ν p 1 = 0 ( 1 ) ν p 1 ν p 1 ! 1 m p + ν p + m p 1 + ν p 1 + 2 ν p j + 1 = 0 ( 1 ) ν p j + 1 ν p j + 1 ! 1 m p + ν p + + m p j + 1 + ν p j + 1 + j Δ j ( λ p j ) , j = 2 , , p 1 .
Then, in the general real case, one will obtain the density function of λ 1 that is specified in the next result.
Theorem 5.
When m j = α p + 1 2 + p k j is not a positive integer, with k j being as specified in (12), the density function of λ 1 , the largest eigenvalue of a real p × p matrix-variate gamma random variable is given by
g 21 ( λ 1 ) d λ 1 = π p 2 2 Γ p ( p 2 ) Γ p ( α ) K ( 1 ) ρ K Δ p 1 ( λ 1 ) λ 1 m 1 e λ 1 d λ 1 , 0 < λ 1 < , ( 3.6 )
where Δ p 1 ( λ 1 ) is the expression resulting from the evaluation of the ( p 1 ) t h integration step.
The corresponding density function of λ 1 for a general complex matrix-variate gamma distribution is provided in the next theorem. Observe that in the case of a complex Wishart distribution, m j is necessarily an integer and hence, there is no need to consider the general case.
Theorem 6.
When m j = α p + r j is not a positive integer, where r j is as defined in (12a), the density function of λ 1 , the largest eigenvalue of a complex p × p matrix-variate gamma distribution, denoted by g ˜ 21 ( λ 1 ) , is given by
g ˜ 21 ( λ 1 ) d λ 1 = π p ( p 1 ) Γ ˜ p ( p ) Γ ˜ p ( α ) K ( 1 ) ρ K r 1 , , r p Δ p 1 ( λ 1 ) λ 1 m 1 e λ 1 d λ 1 , 0 < λ 1 < ,
where Δ p 1 ( λ 1 ) is the expression resulting from the evaluation of the ( p 1 ) t h integration step wherein m j = α p + r j .

3.4. Density Function of the Smallest Eigenvalue of a Matrix-Variate Gamma Random Variable in the General Case

Once again, ‘general case’ is understood to mean that m j = α p + 1 2 + p k j is not a positive integer, where k j is defined in (12). For the real Wishart distribution, the ‘general case’ corresponds to m j being a half-integer. In order to determine the density function of the smallest eigenvalue, we will integrate out λ 1 , , λ p 1 . Thus, we initially evaluate the following integral:
λ 1 = λ 2 λ 1 m 1 e λ 1 d λ 1 = Γ ( m 1 + 1 ) λ 1 = 0 λ 2 λ 1 m 1 e λ 1 d λ 1 = Γ ( m 1 + 1 ) μ 1 = 0 ( 1 ) μ 1 μ 1 ! 1 m 1 + μ 1 + 1 λ 2 m 1 + μ 1 + 1 .
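The tail integral above, that is, Γ ( m 1 + 1 ) minus the term-by-term series over ( 0 , λ 2 ) , can likewise be verified numerically. In this sketch, the upper quadrature limit of 40 is an arbitrary point beyond which the integrand is negligible, and the parameter values are illustrative:

```python
import math

def simpson(f, a, b, n=20000):
    # Composite Simpson's rule (n must be even)
    dx = (b - a) / n
    s = f(a) + f(b) + sum((4 if i % 2 else 2) * f(a + i * dx) for i in range(1, n))
    return s * dx / 3

def upper_tail(m, x, terms=80):
    # Gamma(m+1) minus the term-by-term integral of t^m e^(-t) over (0, x)
    head = sum((-1) ** mu / math.factorial(mu) * x ** (m + mu + 1) / (m + mu + 1)
               for mu in range(terms))
    return math.gamma(m + 1) - head

m, x = 1.75, 0.9
quad = simpson(lambda t: t ** m * math.exp(-t), x, 40.0)  # tail beyond 40 is negligible
print(abs(upper_tail(m, x) - quad) < 1e-6)  # True
```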
The second step consists of integrating out λ 2 from the terms obtained in the first step multiplied by λ 2 m 2 e λ 2 :
Γ ( m 1 + 1 ) λ 2 = λ 3 λ 2 m 2 e λ 2 d λ 2 μ 1 = 0 ( 1 ) μ 1 μ 1 ! 1 m 1 + μ 1 + 1 λ 2 = λ 3 λ 2 m 1 + μ 1 + m 2 + 1 e λ 2 d λ 2
= Γ ( m 1 + 1 ) Γ ( m 2 + 1 ) Γ ( m 1 + 1 ) μ 2 = 0 ( 1 ) μ 2 μ 2 ! 1 m 2 + μ 2 + 1 λ 3 m 2 + μ 2 + 1 μ 1 = 0 ( 1 ) μ 1 μ 1 ! ( m 1 + μ 1 + 1 ) Γ ( m 1 + μ 1 + m 2 + 2 ) + μ 1 = 0 ( 1 ) μ 1 μ 1 ! 1 m 1 + μ 1 + 1 μ 2 = 0 ( 1 ) μ 2 μ 2 ! λ 3 m 1 + μ 1 + m 2 + μ 2 + 2 m 1 + μ 1 + m 2 + μ 2 + 2 W 2 ( λ 3 ) .
A pattern is already seen to emerge. There will be 2 j main terms in W j ( λ j + 1 ) , with half of them involving powers of λ j + 1 . Note that in computational work, the W j ( λ j + 1 ) ’s are successively generated automatically up to W p 1 ( λ p ) .
Theorem 7.
When m j = α p + 1 2 + p k j is not a positive integer, with k j being as specified in (12), the density function of λ p , the smallest eigenvalue of a real p × p matrix-variate gamma distribution is given by
g 2 p ( λ p ) d λ p = π p 2 2 Γ p ( p 2 ) Γ p ( α ) K ( 1 ) ρ K W p 1 ( λ p ) λ p m p e λ p d λ p , 0 < λ p < , ( 3.7 )
where W p 1 ( λ p ) is the expression resulting from the evaluation of the ( p 1 ) t h integration step.
The corresponding distribution of λ p for a complex matrix-variate gamma distribution in the general case is provided in the following theorem.
Theorem 8.
For the general case involving a complex p × p matrix-variate gamma distribution, in which instance m j = α p + r j is not a positive integer, with r j being as defined in (12a), the density function of the smallest eigenvalue λ p is given by
g ˜ 2 p ( λ p ) d λ p = π p ( p 1 ) Γ ˜ p ( p ) Γ ˜ p ( α ) K ( 1 ) ρ K r 1 , , r p W p 1 ( λ p ) λ p m p e λ p d λ p , 0 < λ p < ,
where W p 1 ( λ p ) is the expression obtained at the ( p 1 ) t h integration step, with m j = α p + r j .
In the complex Wishart case, α = m where m > p 1 is the number of degrees of freedom, which is a positive integer. Hence, in this instance, the m j ’s are integers and there is no need to resort to the general case.
Remark 4.
It should be observed that by integrating out λ 1 , , λ j 1 with the procedure described in Theorem 5 and integrating out λ p , , λ j + 1 in conjunction with the technique outlined in Theorem 7, one can also obtain the density function of any given λ j in the general case. As well, by proceeding similarly, one can derive the joint density function of any set of consecutive λ j ’s. Likewise, one could also determine the density function of a specific λ j or the joint density function of any set of successive λ j ’s in the complex domain by making use of the procedures described in Theorems 6 and 8.

4. The Distribution of the Eigenvalues of a Matrix-Variate Type-1 Beta Random Variable

In the case of real and complex matrix-variate type-1 beta distributions, the joint density functions of the eigenvalues 1 > λ 1 > λ 2 > > λ p > 0 are respectively available from (9) and (9a) wherein the product i < j ( λ i λ j ) is simplified as explained in Section 2.1. In the real domain, m j = α p + 1 2 + p k j where k j is specified in (12) whereas, in the complex domain, m j = α p + r j , where r j is defined in (12a). We begin with the density function of λ p , the smallest eigenvalue in the real case, assuming that m j is a positive integer.

4.1. Density Function of the Smallest Eigenvalue of a Matrix-Variate Type-1 Beta Random Variable

Let us assume for now that m j = α p + 1 2 + p k j is a positive integer, where the k j ’s are as given in (12). At first, λ 1 is integrated out, that is, letting b = β p + 1 2 ,
λ 1 = λ 2 1 λ 1 m 1 ( 1 λ 1 ) b d λ 1 = λ 1 m 1 ( 1 λ 1 ) b + 1 b + 1 | λ 2 1 + m 1 b + 1 λ 1 = λ 2 1 λ 1 m 1 1 ( 1 λ 1 ) b + 1 d λ 1 = = μ 1 = 0 m 1 m 1 ! ( m 1 μ 1 ) ! ( b + 1 ) μ 1 + 1 λ 2 m 1 μ 1 ( 1 λ 2 ) b + μ 1 + 1
where ( c ) m = c ( c + 1 ) ( c + m 1 ) is the Pochhammer symbol. The second step in the integration process consists of multiplying the sum obtained in the first step by λ 2 m 2 ( 1 λ 2 ) b and integrating the resulting expression with respect to λ 2 from λ 3 to 1. Thus, one has
μ 1 = 0 m 1 m 1 ! ( m 1 μ 1 ) ! ( b + 1 ) μ 1 + 1 μ 2 = 0 m 1 μ 1 + m 2 ( m 1 μ 1 + m 2 ) ! ( m 1 μ 1 + m 2 μ 2 ) ! ( 2 b + μ 1 + 2 ) μ 2 + 1 × λ 3 m 1 μ 1 + m 2 μ 2 ( 1 λ 3 ) 2 b + μ 1 + μ 2 + 2 .
Proceeding in such a manner, the step j integral, denoted by A j ( λ j + 1 ) , is obtained as
μ 1 = 0 m 1 m 1 ! ( m 1 μ 1 ) ! ( b + 1 ) μ 1 + 1 μ 2 = 0 m 1 μ 1 + m 2 ( m 1 μ 1 + m 2 ) ! ( m 1 μ 1 + m 2 μ 2 ) ! ( 2 b + m 1 + 2 ) μ 2 + 1 × μ j = 0 m 1 μ 1 + + m j ( m 1 μ 1 + + m j ) ! ( m 1 μ 1 + + m j μ j ) ! ( j b + μ 1 + + μ j 1 + j ) μ j + 1 × λ j + 1 m 1 μ 1 + + m j μ j ( 1 λ j + 1 ) j b + μ 1 + + μ j + j A j ( λ j + 1 ) , j = 2 , , p 1 .
On completing the ( p 1 ) th step and taking the permutations of the k j ’s into account, one will end up with the marginal density function of λ p which is given in the next result. Note that this density function could also be re-expressed as a linear combination of type-1 beta density functions.
Theorem 9.
When m j = α p + 1 2 + p k j is a positive integer, where k j is as defined in (12), the marginal density function of the smallest eigenvalue λ p of a real p × p matrix-variate type-1 beta random variable, is given by
g 3 p ( λ p ) d λ p = π p 2 2 Γ p ( α + β ) Γ p ( p 2 ) Γ p ( α ) Γ p ( β ) K ( 1 ) ρ K A p 1 ( λ p ) λ p m p ( 1 λ p ) b d λ p , 0 λ p 1 , ( 4.1 )
where A p 1 ( λ p ) is the expression obtained at the ( p 1 ) t h integration step.
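The building-block identity underlying each step, namely the first-step integral obtained above by repeated integration by parts, can be checked numerically. In the sketch below, `poch` implements the Pochhammer symbol and the parameter values are illustrative (m must be a nonnegative integer, while b may be fractional):

```python
import math

def poch(c, n):
    # Pochhammer symbol (c)_n = c (c+1) ... (c+n-1)
    out = 1.0
    for i in range(n):
        out *= c + i
    return out

def tail_beta_integral(m, b, x):
    # Closed form of the integral of t^m (1-t)^b over (x, 1), integer m >= 0
    return sum(math.factorial(m) / math.factorial(m - mu) / poch(b + 1, mu + 1)
               * x ** (m - mu) * (1 - x) ** (b + mu + 1)
               for mu in range(m + 1))

def simpson(f, a, b, n=2000):
    # Composite Simpson's rule (n must be even)
    dx = (b - a) / n
    s = f(a) + f(b) + sum((4 if i % 2 else 2) * f(a + i * dx) for i in range(1, n))
    return s * dx / 3

m, b, x = 3, 1.5, 0.4
quad = simpson(lambda t: t ** m * (1 - t) ** b, x, 1.0)
print(abs(tail_beta_integral(m, b, x) - quad) < 1e-6)  # True
```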
The procedure being employed for deriving the density function of the smallest eigenvalue λ p is analogous in the complex case. As well, A j ( λ j + 1 ) will have the same representation as in the j th step integral except that m j = α p + r j and b = β p where r j is as defined in (12a), with m j remaining a positive integer. Then, we have the following counterpart to Theorem 9 in the complex domain:
Theorem 10.
When m j = α p + r j is a positive integer where r j is defined in (12a), the marginal density function of λ p , the smallest eigenvalue of a complex matrix-variate type-1 beta random variable, is given by
g ˜ 3 p ( λ p ) d λ p = π p ( p 1 ) Γ ˜ p ( α + β ) Γ ˜ p ( p ) Γ ˜ p ( α ) Γ ˜ p ( β ) K ( 1 ) ρ K r 1 , , r p A p 1 ( λ p ) λ p m p ( 1 λ p ) b d λ p , 0 λ p 1 ,
where A p 1 ( λ p ) is as defined in the real case except that m j = α p + r j and b = β p .

4.2. Density Function of the Largest Eigenvalue of a Matrix-Variate Type-1 Beta Random Variable

In the real case, we let m j = α p + 1 2 + p k j where k j is defined in (12) and b = β p + 1 2 . We first consider the case of m j being a positive integer. In order to obtain the density function of λ 1 , we integrate out λ p , λ p 1 , , λ 2 from the joint density function of the λ i ’s given in (9) wherein the second product is expressed as the sum appearing in (12). The first step integral is
λ p = 0 λ p 1 λ p m p ( 1 λ p ) b d λ p = λ p m p ( 1 λ p ) b + 1 b + 1 | 0 λ p 1 + m p b + 1 λ p = 0 λ p 1 λ p m p 1 ( 1 λ p ) b + 1 d λ p = = m p ! ( b + 1 ) m p + 1 ν p = 0 m p m p ! ( m p ν p ) ! ( b + 1 ) ν p + 1 λ p 1 m p ν p ( 1 λ p 1 ) b + ν p + 1 .
In the second step, the previous expression is multiplied by λ p 1 m p 1 ( 1 λ p 1 ) b and integrated with respect to λ p 1 . That is,
m p ! ( b + 1 ) m p λ p 1 = 0 λ p 2 λ p 1 m p 1 ( 1 λ p 1 ) b d λ p 1 ν p = 0 m p m p ! ( m p ν p ) ! ( b + 1 ) ν p + 1 × λ p 1 = 0 λ p 2 λ p 1 m p ν p + m p 1 ( 1 λ p 1 ) 2 b + ν p + 1 d λ p 1
= m p ! ( b + 1 ) m p + 1 m p 1 ! ( b + 1 ) m p 1 + 1 m p ! ( b + 1 ) m p + 1 ν p 1 = 0 m p 1 m p 1 ! ( m p 1 ν p 1 ) ! ( b + 1 ) ν p 1 + 1 × λ p 2 m p 1 ν p 1 ( 1 λ p 2 ) b + 1 + ν p 1 ν p = 0 m p m p ! ( m p ν p ) ! ( b + 1 ) ν p + 1 ( m p ν p + m p 1 ) ! ( 2 b + ν p + 2 ) m p ν p + m p 1 + 1 + ν p = 0 m p m p ! ( m p ν p ) ! ( b + 1 ) ν p + 1 ν p 1 = 0 m p ν p + m p 1 ( m p ν p + m p 1 ) ! ( m p ν p + m p 1 ν p 1 ) ! ( 2 b + ν p + 2 ) ν p 1 + 1 × λ p 2 m p ν p + m p 1 ν p 1 ( 1 λ p 2 ) 2 b + ν p + ν p 1 + 2 .
In light of the pattern emerging from the first two steps, it is seen that the j th integration step will result in terms of the type λ p j ρ ( 1 λ p j ) δ as well as other terms that do not involve λ p j , j = 3 , , p 1 . The expression obtained at the ( p 1 ) th step will then be multiplied by λ 1 m 1 ( 1 λ 1 ) b . Finally, on taking into account all the permutations of the k j ’s and their associated signs as well as the normalizing constant, one will obtain a finite linear combination of the functional parts of type-1 beta density functions, which is the representation of the density function of λ 1 that is given in the next theorem. Note that this methodology can easily be coded for computational purposes.
Theorem 11.
When m j = α p + 1 2 + p k j is a positive integer where k j is defined in (12), the density function of λ 1 , the largest eigenvalue of a real p × p matrix-variate type-1 beta random variable, is of the form
g 31 ( λ 1 ) d λ 1 = π p 2 2 Γ p ( α + β ) Γ p ( p 2 ) Γ p ( α ) Γ p ( β ) c λ 1 γ 1 ( 1 λ 1 ) κ 1 d λ 1 , 0 λ 1 1 , ( 4.2 )
where the c ’s are the coefficients of the linear combination resulting from applying the proposed sequential procedure.
The following distributional results can readily be deduced from the representation of the density function of λ 1 appearing in Theorem 11:
  • The constant ζ 2 can be determined by integrating the sum term by term from zero to one and taking the inverse of the result, that is,
    c Γ ( γ ) Γ ( κ ) Γ ( γ + κ ) 1 = π p 2 2 Γ p ( α + β ) Γ p ( p 2 ) Γ p ( α ) Γ p ( β ) = ζ 2 .
  • The cumulative distribution function will be
    F 31 ( ξ ) = ζ 2 c B ξ ( γ , κ )
    where B ξ ( γ , κ ) denotes the incomplete beta function with parameters γ and κ .
  • The moment of order h will be
    m 31 ( h ) = ζ 2 c Γ ( γ ) Γ ( κ ) Γ ( γ + κ ) ρ = 0 h 1 γ + ρ γ + κ + ρ .
  • The characteristic function will be
    M 31 ( t ) = ζ 2 c Γ ( γ ) Γ ( κ ) Γ ( γ + κ ) 1 F 1 ( γ , γ + κ ; i t ) , i = ( 1 ) ,
    where 1F1 ( · , · ; · ) denotes Kummer’s confluent hypergeometric function.
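Since the density function of λ 1 is a linear combination of beta kernels, each of the quantities above is assembled from single-component formulas. The following sketch verifies the moment product formula for one beta ( γ , κ ) component against quadrature; the parameter values are illustrative:

```python
import math

def beta_moment(g, k, h):
    # h-th moment of a Beta(g, k) component: prod_{rho=0}^{h-1} (g+rho)/(g+k+rho)
    out = 1.0
    for rho in range(h):
        out *= (g + rho) / (g + k + rho)
    return out

def simpson(f, a, b, n=2000):
    # Composite Simpson's rule (n must be even)
    dx = (b - a) / n
    s = f(a) + f(b) + sum((4 if i % 2 else 2) * f(a + i * dx) for i in range(1, n))
    return s * dx / 3

g, k, h = 3.0, 2.5, 4
norm = math.gamma(g) * math.gamma(k) / math.gamma(g + k)  # Beta(g, k) constant
quad = simpson(lambda x: x ** (g + h - 1) * (1 - x) ** (k - 1) / norm, 0.0, 1.0)
print(abs(beta_moment(g, k, h) - quad) < 1e-5)  # True
```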
Example 2.
Let p = 3 , α = 3 and β = 5 / 2 , in which case the the joint density function of λ 1 , λ 2 and λ 3 is ζ 2 λ 1 λ 2 λ 3 1 λ 1 1 λ 2 1 λ 3 A where ζ 2 = 33 , 075 / 2 is the normalizing constant and A is the order three Vandermonde matrix that is defined in (11). On completing the second integration step of the sequential procedure and proceeding as previously explained, one obtains the following density function which is plotted in Figure 2:
g 3 f ( λ 1 ) = ζ 2 ( 1 630 1 λ 1 λ 1 9 + 4 945 1 λ 1 λ 1 8 + 1 189 1 λ 1 λ 1 7 16 λ 1 7 945 2 63 1 λ 1 λ 1 6 + 16 λ 1 6 315 + 8 189 1 λ 1 λ 1 5 16 λ 1 5 315 16 945 1 λ 1 λ 1 4 + 16 λ 1 4 945 ) ,
wherefrom the distribution function, moments and characteristic function of λ 1 can easily be determined. Moreover, it was verified that the expression in parentheses integrates precisely to 2 / 33 , 075 Γ 3 3 2 Γ 3 5 2 Γ 3 ( 3 ) / ( π 9 / 2 Γ 3 11 2 ) , and determined that λ 1 has a mean value of 0.845 and a standard deviation of 0.109.
A methodology paralleling that employed in the real domain can be applied to obtain, in the complex case, the representation of the density function of λ 1 that is given in the next theorem.
Theorem 12.
When m j = α p + r j , j = 1 , , p , is a positive integer where r j is as defined in (12a), the density function of λ 1 , the largest eigenvalue of a complex p × p matrix-variate type-1 beta random variable, can be expressed as follows:
g ˜ 31 ( λ 1 ) d λ 1 = π p ( p 1 ) Γ ˜ p ( α + β ) Γ ˜ p ( p ) Γ ˜ p ( α ) Γ ˜ p ( β ) c λ 1 γ 1 ( 1 λ 1 ) κ 1 d λ 1 , 0 λ 1 1 ,
where the c ’s are the coefficients of the linear combination resulting from applying the proposed procedure wherein m j = α p + r j and b = β p .
Distributional results that are analogous to those following Theorem 11 also hold in the complex domain as g 31 ( λ 1 ) and g ˜ 31 ( λ 1 ) happen to have the same structure.
Remark 5.
At each step of integration, one can obtain the joint density function of the remaining eigenvalues, whether in the real or complex case. If the marginal density function of λ j for a specific j is needed, one can integrate out λ 1 , , λ j 1 by applying Theorem 9 or 10 and integrate out λ p , λ p 1 , , λ j + 1 by making use of Theorem 11 or 12 when m j is a positive integer, either in the real or complex case. In the case of a complex matrix-variate type-1 beta density function, the exponents of λ j and ( 1 λ j ) are α p and β p , respectively. Hence, these quantities are nonnegative integers if α > p 1 and β > p 1 are positive integers. If the complex matrix-variate type-1 beta distribution is obtained from transformed complex Wishart density functions, as in the case of one-to-one functions of the likelihood ratio statistics in tests of hypotheses on the parameters of Gaussian populations, then α and β are both positive integers. If α or β is a positive integer, then the general case as discussed in the next subsection is not required. However, in the real case, a general situation may exist in the sense that m j = α p + 1 2 + p k j need not be an integer. When m j as defined in Theorems 9–12 is not a positive integer and b = β ( p + 1 ) / 2 in the real case or b = β p in the complex case happens to be a positive integer, one may then apply integration by parts with respect to ( 1 λ j ) b instead of λ j m j .

4.3. The Distribution of the Eigenvalues of a Matrix-Variate Type-1 Beta Random Variable in the General Case

‘General case’ is understood to mean that m j = α p + 1 2 + p k j or m j = β p + 1 2 is not a positive integer in the real case, and m j = α p + r j or m j = β p is not a positive integer in the complex case. In the general case, we may expand ( 1 λ j ) b by using a binomial expansion since 0 < λ j < 1 , that is, ( 1 λ j ) ( b ) = n = 0 ( b ) n n ! λ j n , where ( b ) n is the Pochhammer symbol. Then, we may proceed term by term when integrating with respect to λ 1 , λ 2 , , or λ p , λ p 1 , , as was done in Section 3.3 and Section 3.4.
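A quick numerical check of this binomial expansion for a non-integer exponent follows; the truncation level is a choice of the sketch:

```python
import math

def poch(c, n):
    # Pochhammer symbol (c)_n = c (c+1) ... (c+n-1)
    out = 1.0
    for i in range(n):
        out *= c + i
    return out

def binom_series(b, x, terms=120):
    # (1 - x)^b = sum_{n >= 0} (-b)_n x^n / n!, convergent for |x| < 1
    return sum(poch(-b, n) * x ** n / math.factorial(n) for n in range(terms))

b, x = 1.7, 0.45  # non-integer exponent, as in the general case
print(abs(binom_series(b, x) - (1 - x) ** b) < 1e-10)  # True
```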

5. The Distribution of the Eigenvalues of a Matrix-Variate Type-2 Beta Random Variable

The joint density functions of the eigenvalues λ 1 > λ 2 > > λ p > 0 for the real and complex matrix-variate type-2 beta distributions are respectively available from (10) and (10a) wherein the second product can be simplified as explained in Section 2.1. In the real case, m j = α p + 1 2 + p k j where k j is as given in (12) and, in the complex case, m j = α p + r j where r j is as specified in (12a). We begin with the density function of the smallest eigenvalue λ p in the real domain when m j is a positive integer.

5.1. Density Function of the Smallest Eigenvalue of a Real Matrix-Variate Type-2 Beta Random Variable

Let m j , as defined in the introductory statement, be a positive integer. In order to secure the density function of λ p , one has to integrate out λ 1 , , λ p 1 . Letting b = ( α + β ) , the integral to be initially evaluated is
λ 1 = λ 2 λ 1 m 1 ( 1 + λ 1 ) b d λ 1 = λ 1 m 1 ( 1 + λ 1 ) b + 1 ( b 1 ) | λ 2 + m 1 b 1 λ 1 = λ 2 λ 1 m 1 1 ( 1 + λ 1 ) b + 1 d λ 1 = = μ 1 = 0 m 1 m 1 ! ( m 1 μ 1 ) ! [ b 1 ] μ 1 λ 2 m 1 μ 1 ( 1 + λ 2 ) b + 1 + μ 1
where
[ a ] m = a ( a 1 ) ( a m ) , [ a ] 0 = a .
By proceeding in this fashion, one obtains the step j integral as
μ 1 = 0 m 1 m 1 ! ( m 1 μ 1 ) ! [ b 1 ] μ 1 μ 2 = 0 m 1 μ 1 + m 2 ( m 1 μ 1 + m 2 ) ! ( m 1 μ 1 + m 2 μ 2 ) ! [ 2 b 2 μ 1 ] μ 2 × μ j = 0 m 1 μ 1 + + m j ( m 1 μ 1 + + m j ) ! ( m 1 μ 1 + m 2 μ 2 + + m j μ j ) ! [ j b j μ 1 μ j 1 ] μ j × λ j + 1 m 1 μ 1 + + m j μ j ( 1 + λ j + 1 ) j b + j + μ 1 + + μ j ψ j ( λ j + 1 ) .
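The first-step identity above, involving the bracket [ a ] m , can be verified numerically. The sketch below also uses the substitution u = t / ( 1 + t ) , the same device as in Section 5.3, so that the reference integral is taken over a finite interval; all parameter values are illustrative:

```python
import math

def falling(a, m):
    # Bracket [a]_m = a (a-1) ... (a-m), with [a]_0 = a  (m+1 factors)
    out = 1.0
    for i in range(m + 1):
        out *= a - i
    return out

def tail_type2_integral(m, b, x):
    # Closed form of the integral of t^m (1+t)^(-b) over (x, infinity),
    # for integer m >= 0 and b > m + 1
    return sum(math.factorial(m) / math.factorial(m - mu) / falling(b - 1, mu)
               * x ** (m - mu) * (1 + x) ** (-b + 1 + mu)
               for mu in range(m + 1))

def simpson(f, a, b, n=4000):
    # Composite Simpson's rule (n must be even)
    dx = (b - a) / n
    s = f(a) + f(b) + sum((4 if i % 2 else 2) * f(a + i * dx) for i in range(1, n))
    return s * dx / 3

m, b, x = 3, 9.0, 0.8
# substituting u = t/(1+t) turns the integrand into u^m (1-u)^(b-m-2) on [x/(1+x), 1]
quad = simpson(lambda u: u ** m * (1 - u) ** (b - m - 2), x / (1 + x), 1.0)
print(abs(tail_type2_integral(m, b, x) - quad) < 1e-6)  # True
```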
Then, the density function of λ p , denoted by g 4 p ( λ p ) , is provided in the next result.
Theorem 13.
When m j = α p + 1 2 + p k j , j = 1 , , p , is a positive integer where k j is as defined in (12), the density function of λ p , the smallest eigenvalue of a real p × p matrix-variate type-2 beta random variable, is given by
g 4 p ( λ p ) d λ p = π p 2 2 Γ p ( α + β ) Γ p ( p 2 ) Γ p ( α ) Γ p ( β ) K ( 1 ) ρ K ψ p 1 ( λ p ) λ p m p ( 1 + λ p ) b d λ p , 0 λ p < , ( 5.1 )
where b = α + β and ψ p 1 ( λ p ) denotes the result of the ( p 1 ) t h step integral.
The procedure is analogous for the case of a complex matrix-variate type-2 beta distribution. However, in this instance, m j = α p + r j where r j is as defined in (12a). When this m j is a positive integer, the density function of the smallest eigenvalue λ p , denoted by g ˜ 4 p ( λ p ) , is as given in the next result.
Theorem 14.
When m j = α p + r j , j = 1 , , p , is a positive integer, where r j is as defined in (12a), the density function of λ p , the smallest eigenvalue of a complex p × p matrix-variate type-2 beta random variable, is
g ˜ 4 p ( λ p ) d λ p = π p ( p 1 ) Γ ˜ p ( α + β ) Γ ˜ p ( p ) Γ ˜ p ( α ) Γ ˜ p ( β ) K ( 1 ) ρ K r 1 , , r p ψ p 1 ( λ p ) λ p m p ( 1 + λ p ) b d λ p , 0 λ p < ,
where ψ p 1 ( λ p ) is as defined in the real case except that m j = α p + r j and b = α + β .
Remark 6.
The above procedure, whether in the real or complex case, holds when m j = β p + 1 2 + p k j is a positive integer in the real case and m j = β p + r j is a positive integer in the complex case. Upon making the transformation δ j = 1 λ j , j = 1 , , p , the parameters α and β are interchanged in the joint density function of δ 1 , , δ p . Hence, there is no need for a separate discussion when the condition applies to β instead of α. If a complex type-2 beta distribution corresponds to a transformed complex Wishart distribution as is the case in connection with certain analysis of variance problems, then both α and β are positive integers and consequently, the general case need not be considered since m j is then an integer.

5.2. Density Function of the Largest Eigenvalue of a Matrix-Variate Type-2 Beta Random Variable

Let us assume that m j = α p + 1 2 + p k j is a positive integer. In the process of integrating out λ p , λ p 1 , , λ 2 from the joint density function of the λ i ’s given in (10) wherein the second product is expressed in terms of the sum appearing in (12), the first step integral is
λ p = 0 λ p 1 λ p m p ( 1 + λ p ) b d λ p = λ p m p ( 1 + λ p ) b + 1 ( b 1 ) | 0 λ p 1 + m p ( b 1 ) λ p = 0 λ p 1 λ p m p 1 ( 1 + λ p ) b + 1 d λ p = = m p ! [ b 1 ] m p ν p = 0 m p m p ! ( m p ν p ) ! [ b 1 ] ν p λ p 1 m p ν p ( 1 + λ p 1 ) b + ν p + 1
where [ a ] m = a ( a 1 ) ( a m ) , with [ a ] 0 = a . This expression is multiplied by λ p 1 m p 1 ( 1 + λ p 1 ) b and integrated over λ p 1 to obtain the following representation of the second step integral:
m p ! [ b 1 ] m p λ p 1 = 0 λ p 2 λ p 1 m p 1 ( 1 + λ p 1 ) b d λ p 1 ν p = 0 m p m p ! ( m p ν p ) ! [ b 1 ] ν p λ p 1 m p ν p + m p 1 ( 1 + λ p 1 ) 2 b + ν p + 1 d λ p 1 = m p ! [ b 1 ] m p m p 1 ! [ b 1 ] m p 1 m p ! [ b 1 ] m p ν p 1 = 0 m p 1 m p 1 ! ( m p 1 ν p 1 ) ! [ b 1 ] ν p 1 × λ p 2 m p 1 ν p 1 ( 1 + λ p 2 ) b + 1 + ν p 1 ν p = 0 m p m p ! ( m p ν p ) ! [ b 1 ] ν p ( m p ν p + m p 1 ) ! [ 2 b 2 ν p ] m p ν p + m p 1 + ν p = 0 m p m p ! ( m p ν p ) ! [ b 1 ] m p ν p 1 = 0 m p ν p + m p 1 ( m p ν p + m p 1 ) ! ( m p ν p + m p 1 ν p 1 ) ! [ 2 b 2 ν p ] ν p 1 × λ p 2 m p ν p + m p 1 ν p 1 ( 1 + λ p 2 ) 2 b + 2 + ν p + ν p 1 .
A pattern is seen to develop. There will be 2 j main terms at the j th step of integration, of which 2 j 1 will be preceded by a plus sign and as many will start with a minus sign. As well, it can be observed that, at every step, the integrands will be linear combinations of terms of the type x r ( 1 + x ) δ where r is an integer, which, on being integrated by parts from 0 to λ p j at the j th step, will invariably result in terms of the types obtained in the first step of integration with appropriate substitutions. Since the permutations of the k j ’s will not affect the structure of the ( p 1 ) th integral, the final representation of the density function of λ 1 will then be a finite linear combination of the functional parts of beta density functions of the second type. Hence the next result.
Theorem 15.
Let m j = α p + 1 2 + p k j , j = 1 , , p , be a positive integer, where k j is as defined in (12). On applying the above-described sequential procedure, the density function of λ 1 , the largest eigenvalue of a real p × p matrix-variate type-2 beta random variable, can be expressed in the following form:
g 41 ( λ 1 ) d λ 1 = π p 2 2 Γ p ( α + β ) Γ p ( p 2 ) Γ p ( α ) Γ p ( β ) c λ 1 γ 1 ( 1 + λ 1 ) γ κ d λ 1 , 0 λ 1 < , ( 5.2 )
where γ could be equal to one and the c ’s are the coefficients of the linear combination so obtained.
Thus, the following results also hold:
  • The constant ζ 3 can be determined by integrating the sum, term by term, from 0 to ∞ and taking the inverse of the result, that is,
    c Γ ( γ ) Γ ( κ ) Γ ( γ + κ ) 1 = π p 2 2 Γ p ( α + β ) Γ p ( p 2 ) Γ p ( α ) Γ p ( β ) = ζ 3 .
  • The cumulative distribution function will be
    F 41 ( λ ) = ζ 3 c B λ 1 1 + λ 1 ( γ , κ )
    where B ( · ) ( γ , κ ) denotes the incomplete beta function with parameters γ and κ .
  • The moment of order h will be
    m 41 ( h ) = ζ 3 c Γ ( γ ) Γ ( κ ) Γ ( γ + κ ) ρ = 1 h γ + ρ 1 κ ρ , κ ρ > 0 .
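For a single type-2 beta ( γ , κ ) component of the linear combination, the moment product formula above agrees with the closed form Γ ( γ + h ) Γ ( κ − h ) / ( Γ ( γ ) Γ ( κ ) ) ; a quick check with illustrative parameter values:

```python
import math

def type2_moment(g, k, h):
    # prod_{rho=1}^{h} (g + rho - 1)/(k - rho), valid when k > h
    out = 1.0
    for rho in range(1, h + 1):
        out *= (g + rho - 1) / (k - rho)
    return out

g, k, h = 3.0, 6.0, 2
exact = math.gamma(g + h) * math.gamma(k - h) / (math.gamma(g) * math.gamma(k))
print(abs(type2_moment(g, k, h) - exact) < 1e-12)  # True
print(type2_moment(g, k, h))  # 0.6
```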
Example 3.
Let p = 3 , α = 4 and β = 4 , so that the joint density function of the λ i ’s is ζ 3 λ 1 2 λ 2 2 λ 3 2 ( 1 + λ 1 ) 8 ( 1 + λ 2 ) 8 ( 1 + λ 3 ) 8 A , ζ 3 = 15 , 135 , 120 being the normalizing constant and A, the Vandermonde matrix of order three that is defined in (11). On integrating this density function with respect to λ 3 and λ 2 while taking into account the six permutations of the k j ’s along with their associated signs, and simplifying the resulting expression, one will obtain the following representation of the density function of λ 1 :
g 4 f ( λ 1 ) = 28 λ 1 11 5 λ 1 λ 1 λ 1 3 λ 1 + 26 + 91 + 143 + 429 λ 1 + 1 19 .
It was verified that g 4 f ( λ 1 ) indeed integrates to one and determined that, in this case, λ 1 , whose density function is plotted in Figure 3, has a mean value of 4.4755 and a standard deviation of 4.7243.
Analogous results apply in the complex domain, as the terms resulting from the j th step of integration of the joint density function (10a) will then be of the same type as those encountered in the real case. The companion result to Theorem 15 for the complex matrix-variate type-2 beta distribution is stated next.
Theorem 16.
When m j = α p + r j , j = 1 , , p , is a positive integer, where r j is as defined in (12a), the density function of λ 1 , the largest eigenvalue of a complex p × p matrix-variate type-2 beta random variable, can be expressed in the following form:
g ˜ 41 ( λ 1 ) d λ 1 = π p ( p 1 ) Γ ˜ p ( α + β ) Γ ˜ p ( p ) Γ ˜ p ( α ) Γ ˜ p ( β ) c λ 1 γ 1 ( 1 + λ 1 ) γ κ d λ 1 , 0 λ 1 < ,
where the c ’s are the coefficients of the linear combination so obtained.

5.3. The Distribution of the Eigenvalues of a Matrix-Variate Type-2 Beta Random Variable in the General Case

In the general real case, consider the transformation
λ j = δ j 1 δ j d λ j = 1 ( 1 δ j ) 2 d δ j , 0 λ j < 0 δ j 1 and 1 + λ j = 1 1 δ j ,
so that
j = 2 p ( λ 1 λ j ) = j = 2 p ( δ 1 δ j ) ( 1 δ 1 ) p 2 j = 1 p ( 1 δ j ) i < j ( λ i λ j ) = i < j ( δ i δ j ) j = 1 p ( 1 δ j ) p 1 ,
and
i < j ( λ i λ j ) d λ 1 d λ p = i < j ( δ i δ j ) j = 1 p ( 1 δ j ) p + 1 d δ 1 d δ p .
Moreover, note that the transformation is order-preserving as λ i > λ j δ i > δ j . Then, re-expressing the joint density function of λ 1 , , λ p in terms of δ 1 , , δ p , one has
j = 1 p λ j α p + 1 2 j = 1 p ( 1 + λ j ) ( α + β ) i < j ( λ i λ j ) d λ 1 d λ p = j = 1 p δ j α p + 1 2 j = 1 p ( 1 δ j ) β p + 1 2 i < j ( δ i δ j ) d δ 1 d δ p ,
which establishes that the joint distribution of δ 1 , , δ p is that of the eigenvalues associated with a real matrix-variate type-1 beta random variable. Since this case has already been discussed in Section 4, no separate consideration of the distribution of the eigenvalues of a real matrix-variate type-2 random variable is required in the general case.
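For p = 1 , the change of variables can be verified pointwise: applying the Jacobian 1 / ( 1 δ ) 2 to the type-2 kernel must reproduce the type-1 kernel. A sketch in which the values of α and β are illustrative:

```python
alpha, beta_ = 4.0, 2.5  # illustrative parameter values

def type2_kernel(t):
    # p = 1 kernel of a type-2 beta density: t^(alpha-1) (1+t)^(-(alpha+beta))
    return t ** (alpha - 1) * (1 + t) ** (-(alpha + beta_))

def type1_kernel(d):
    # p = 1 kernel of a type-1 beta density: d^(alpha-1) (1-d)^(beta-1)
    return d ** (alpha - 1) * (1 - d) ** (beta_ - 1)

for d in (0.1, 0.37, 0.8):
    t = d / (1 - d)                 # the change of variables
    jacobian = 1.0 / (1 - d) ** 2   # dt/dd
    assert abs(type2_kernel(t) * jacobian - type1_kernel(d)) < 1e-12
print("pointwise agreement verified")
```

The transformation being order-preserving, the joint ordering of the eigenvalues is unaffected, which is what permits the direct reuse of the type-1 results.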
The distributions of the largest, smallest and j th largest eigenvalues of a real matrix-variate type-2 beta random variable can readily be determined from those secured for the real matrix-variate type-1 beta case. In a similar manner, the joint density function of δ 1 , , δ p in the complex case also corresponds to that specified in connection with a matrix-variate type-1 beta distribution, with the exponents of δ j and ( 1 δ j ) being α p and β p , respectively. Thus, in the general complex case, type-2 matrix-variate beta distributions can be converted to general complex type-1 beta distributions. Accordingly, no further discussion need be attempted, and all the distributional results that were initially delineated have now been established.

6. Concluding Remarks

Simpler representations of the density functions of the smallest, largest and j th largest eigenvalues of nonsingular matrix-variate gamma and beta random variables of each type were obtained as finite sums when certain parameters are integers and, as explicit series, in the general situations. In each instance, both the real and complex cases were addressed. Three illustrative numerical examples corroborate the validity of the derived expressions. The eigenvalue structures arising from the various representations of the density functions could possibly be further explored for the singular cases, and the associated distributional aspects could then be investigated. However, some challenging technical difficulties would have to be overcome first. For example, if singular models are expressed in terms of submatrices, then extracting the original eigenvalues therefrom may prove difficult, if at all feasible. Certain distributional properties of singular matrix-variate distributions have already been studied in [26,27,28,29], among others.

Author Contributions

Conceptualization, A.M.M.; Methodology, A.M.M. and S.B.P.; Formal analysis, A.M.M. and S.B.P.; Writing—original draft, A.M.M.; Writing—review & editing, S.B.P.; Visualization, S.B.P. All authors have read and agreed to the published version of the manuscript.

Funding

The financial support of the Natural Sciences and Engineering Research Council of Canada is gratefully acknowledged by the second author.

Data Availability Statement

No data sets were generated or analyzed in this work.

Acknowledgments

The authors wish to thank the reviewers for their valuable comments and suggestions.

Conflicts of Interest

The authors declare that they have no conflict of interest.

References

  1. Adhikari, S.; Chakraborty, S. Random matrix eigenvalue problems in structural dynamics: An iterative approach. Mech. Syst. Signal Process. 2022, 164, 108260. [Google Scholar] [CrossRef]
  2. Gol’dshtein, M.S. Distribution of the eigenvalues of the energy operator of a continuous system in quantum statistical mechanics. Theor. Math. Phys. 1985, 63, 412–426. [Google Scholar] [CrossRef]
  3. Yadykin, I.B.; Iskakov, A.B. Comparison of sub-Gramian analysis with eigenvalue analysis for stability estimation of large dynamical systems. Autom. Remote. Control. 2018, 79, 1767–1779. [Google Scholar] [CrossRef]
  4. Reis, C.L.; Martins, J.L. Practical band interpolation with a modified tight-binding method. J. Physics. Condens. Matter 2019, 31, 215501. [Google Scholar]
  5. Bai, Z.; Silverstein, J.W. Limits of Extreme Eigenvalues. In Spectral Analysis of Large Dimensional Random Matrices; Springer Series in Statistics; Springer: New York, NY, USA, 2010. [Google Scholar] [CrossRef]
  6. Pillai, K.C.S. On the distribution of the largest seven roots of a matrix in multivariate analysis. Biometrika 1964, 51, 270–275. [Google Scholar] [CrossRef]
  7. Khatri, C.G. Distribution of the largest or smallest characteristic root under null hypothesis concerning complex multivariate normal populations. Ann. Math. Stat. 1964, 35, 1807–1810. [Google Scholar]
  8. James, A.T. Distributions of matrix variates and latent roots derived from normal samples. Ann. Math. Stat. 1964, 35, 475–501. [Google Scholar] [CrossRef]
  9. Davis, A.W. On the marginal distributions of the latent roots of the multivariate beta matrix. Ann. Math. Stat. 1972, 43, 1664–1670. [Google Scholar] [CrossRef]
  10. Krishnaiah, P.R.; Schuurmann, F.J.; Waikar, V.B. Upper percentage points of the intermediate roots of the MANOVA matrix. Sankhya Ser. B 1973, 35, 339–358. [Google Scholar]
  11. Clemm, D.S.; Chattopadhyay, A.K.; Krishnaiah, P.R. Upper percentage points of the individual roots of the Wishart matrix. Sankhya Ser. B 1973, 35, 325–338. [Google Scholar]
  12. Edelman, A. The distribution and moments of the smallest eigenvalue of a random matrix of Wishart type. Linear Algebra Appl. 1991, 159, 55–80. [Google Scholar] [CrossRef]
  13. Johnstone, I.M. On the distribution of the largest eigenvalue in Principal Components Analysis. Ann. Stat. 2001, 29, 295–327. [Google Scholar] [CrossRef]
  14. Zanella, A.; Chiani, M.; Win, M.Z. A general framework for the distribution of the eigenvalues of Wishart matrices. In Proceedings of the 2008 IEEE International Conference on Communications, Beijing, China, 19–23 May 2008; pp. 1271–1276. [Google Scholar]
  15. Dharmawansa, P.; McKay, M.R. Extreme eigenvalue distributions of Gamma-Wishart random matrices. In Proceedings of the 2011 IEEE International Conference on Communications (ICC), Kyoto, Japan, 5–9 June 2011; pp. 1–6. [Google Scholar]
  16. Chiani, M. Distribution of the largest eigenvalue for real Wishart and Gaussian random matrices and a simple approximation for the Tracy-Widom distribution. J. Multivar. Anal. 2014, 129, 69–81. [Google Scholar] [CrossRef]
  17. James, O.; Lee, H.-N. Concise Probability Distributions of Eigenvalues of Real-Valued Wishart Matrices. 2021. Available online: https://arxiv.org/ftp/arxiv/paper/1402.6757.pdf (accessed on 12 June 2024).
  18. Forrester, P.J.; Kumar, S. Recursion scheme for the largest β-Wishart-Laguerre eigenvalue and Landauer conductance in quantum transport. J. Phys. A 2019, 52, 42LT02. [Google Scholar] [CrossRef]
  19. Sheena, Y.; Gupta, A.K.; Fujikoshi, Y. Estimation of the eigenvalues of noncentrality parameter in matrix variate noncentral beta distribution. Ann. Inst. Stat. Math. 2004, 56, 101–125. [Google Scholar] [CrossRef]
  20. Díaz-García, J.A.; Gutiérrez-Jáimez, R. Singular matrix variate beta distribution. J. Multivar. Anal. 2008, 99, 637–648. [Google Scholar] [CrossRef]
  21. Díaz-García, J.A.; Gutiérrez-Jáimez, R. Doubly singular matrix variate beta type I and II and singular inverted matricvariate t distributions. J. Korean Stat. Soc. 2009, 38, 297–303. [Google Scholar] [CrossRef]
  22. Díaz-García, J.A.; Gutiérrez-Jáimez, R. Noncentral bimatrix variate generalised beta distributions. Metrika 2011, 73, 317–333. [Google Scholar] [CrossRef]
  23. Mathai, A.M. Jacobians of Matrix Transformations and Functions of Matrix Argument; World Scientific Publishing: New York, NY, USA, 1997. [Google Scholar]
  24. Provost, S.B. Moment-based density approximants. Math. J. 2005, 9, 727–756. [Google Scholar]
  25. Provost, S.B.; Zareamoghaddam, H.; Ahmed, S.E.; Ha, H.-T. The generalized Pearson family of distributions and explicit representation of the associated density functions. Commun. Stat. Theory Methods 2020, 49, 1–17. [Google Scholar] [CrossRef]
  26. Díaz-García, J.A.; Gutiérrez-Jáimez, R. Proof of the conjectures of H. Uhlig on the singular multivariate beta and the Jacobian of a certain matrix transformation. Ann. Stat. 1997, 25, 2018–2023. [Google Scholar] [CrossRef]
  27. Srivastava, M.S. Singular Wishart and multivariate beta distributions. Ann. Stat. 2003, 31, 1537–1560. [Google Scholar] [CrossRef]
  28. Shimizu, K.; Hashiguchi, H. Heterogeneous hypergeometric functions with two matrix arguments and the exact distribution of the largest eigenvalue of a singular beta-Wishart matrix. J. Multivar. Anal. 2021, 183, 104714. [Google Scholar] [CrossRef]
  29. Mathai, A.M.; Provost, S.B. On the singular gamma, Wishart, and beta matrix-variate density functions. Can. J. Stat. 2022, 50, 1143–1165. [Google Scholar] [CrossRef]
Figure 1. The density function of λ1 (cf. Example 1).
Figure 2. The density function of λ1 (cf. Example 2).
Figure 3. The density function of λ1 (cf. Example 3).