1. Introduction
We will introduce an extended logistic model through the exponentiation of a real scalar type-2 beta density. For $\alpha>0$, $\beta>0$, and for the real scalar variable $t$, consider the density
$$f_1(t)=\frac{\Gamma(\alpha+\beta)}{\Gamma(\alpha)\Gamma(\beta)}\,\frac{t^{\alpha-1}}{(1+t)^{\alpha+\beta}},\quad t\ge 0,\tag{1}$$
and $f_1(t)=0$ elsewhere. Usually, the parameters in a statistical density are real and hence we take real parameters throughout the paper. The above function is integrable even if the parameters are in the complex domain. In this case, the conditions will be $\Re(\alpha)>0$, $\Re(\beta)>0$, where $\Re(\cdot)$ denotes the real part of $(\cdot)$. Consider an exponentiation; namely, let $t=e^{x}$, $-\infty<x<\infty$, and then the density of $x$, denoted by $f_2(x)$, is the following:
$$f_2(x)=\frac{\Gamma(\alpha+\beta)}{\Gamma(\alpha)\Gamma(\beta)}\,\frac{e^{\alpha x}}{(1+e^{x})^{\alpha+\beta}},\quad -\infty<x<\infty.\tag{2}$$
Note that when $\alpha=1$ and $\beta=1$, $f_2(x)$ reduces to
$$f_2(x)=\frac{e^{x}}{(1+e^{x})^{2}}=\frac{e^{-x}}{(1+e^{-x})^{2}},\quad -\infty<x<\infty,$$
which is the standard logistic density. There is a vast amount of literature on this standard logistic model. The importance of this model arises from the fact that it has a thicker tail compared to that of the standard Gaussian density; thereby, large deviations have higher probabilities compared to the standard Gaussian.
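As a quick numerical illustration of this construction (using the densities as reconstructed in (1) and (2); the function name `f2` and the parameter values below are illustrative assumptions), one can check that the exponentiated type-2 beta density integrates to one and reduces to the standard logistic density at $\alpha=\beta=1$:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma
from scipy.stats import logistic

def f2(x, a, b):
    """Generalized logistic density obtained by putting t = e^x in the
    type-2 beta density, following the reconstruction in (1)-(2) above."""
    return gamma(a + b) / (gamma(a) * gamma(b)) * np.exp(a * x) / (1.0 + np.exp(x)) ** (a + b)

# total probability should be 1 for any alpha, beta > 0
total, _ = quad(f2, -60, 60, args=(2.5, 1.7))
print(total)  # ~1.0

# alpha = beta = 1 recovers the standard logistic density
xs = np.linspace(-5.0, 5.0, 11)
print(np.allclose(f2(xs, 1.0, 1.0), logistic.pdf(xs)))  # True
```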
The following notations will be used in this paper. Real scalar ($1\times1$ matrix) variables will be denoted by lowercase letters such as $x,y$, whether the variables are random or mathematical variables. Capital letters such as $X,Y$ will be used to denote vector ($p\times1$ or $1\times p$ matrix)/matrix variables, whether mathematical or random and whether square matrices or rectangular matrices. Scalar constants will be denoted by $a,b$, etc., and vector/matrix constants by $A,B$, etc. A tilde will be placed over the variable to denote variables in the complex domain, whether mathematical or random, such as $\tilde{x},\tilde{X}$. No tilde will be used on constants. The above general rule will not be followed when symbols or Greek letters are used. These will be explained as needed. For a $p\times q$ matrix $X=(x_{ij})$, where the $x_{ij}$'s are real scalar variables, the wedge product of differentials will be denoted as $\mathrm{d}X=\wedge_{i=1}^{p}\wedge_{j=1}^{q}\mathrm{d}x_{ij}$. For two real scalar variables $x$ and $y$, the wedge product of differentials is defined as $\mathrm{d}x\wedge\mathrm{d}y=-\mathrm{d}y\wedge\mathrm{d}x$ so that $\mathrm{d}x\wedge\mathrm{d}x=0$. When the matrix variable is in the complex domain, one can write $\tilde{X}=X_{1}+iX_{2}$, $i=\sqrt{-1}$, where $X_{1},X_{2}$ are real matrices. Then, $\mathrm{d}\tilde{X}$ will be defined as $\mathrm{d}\tilde{X}=\mathrm{d}X_{1}\wedge\mathrm{d}X_{2}$. The transpose of a matrix $Y$ will be denoted by $Y'$. The complex conjugate transpose will be denoted by $\tilde{Y}^{*}$. When $\tilde{X}$ is such that $\tilde{X}=\tilde{X}^{*}$, then $\tilde{X}$ is Hermitian and, in this case, $X_{1}=X_{1}'$ and $X_{2}=-X_{2}'$, that is, $X_{1}$ is real symmetric and $X_{2}$ is real skew symmetric. The determinant of a square matrix $A$ will be denoted as $|A|$ or as $\det(A)$, and the absolute value of the determinant of $A$, when $A$ is in the complex domain, will be denoted as $|\det(A)|=+\sqrt{a^{2}+b^{2}}$ when $\det(A)=a+ib$, where $a$ and $b$ are real scalar quantities. When a real symmetric matrix $B$ is positive definite, it will be denoted as $B>O$. If the matrix $\tilde{B}=\tilde{B}^{*}>O$, then $\tilde{B}$ is Hermitian positive definite. Other notations will be explained wherever they occur.
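A small numerical sanity check of the conventions just listed (the matrix size and the random entries below are arbitrary): for a Hermitian matrix the real part is symmetric and the imaginary part is skew symmetric, and the absolute value of a complex determinant is $\sqrt{a^{2}+b^{2}}$.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
H = A + A.conj().T            # Hermitian: H = H*
X1, X2 = H.real, H.imag

print(np.allclose(X1, X1.T))   # True: real part is symmetric
print(np.allclose(X2, -X2.T))  # True: imaginary part is skew symmetric

d = np.linalg.det(A)
print(abs(d), np.sqrt(d.real ** 2 + d.imag ** 2))  # |det(A)| = sqrt(a^2 + b^2)
```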
Our aim here is to look into various extensions of the generalized logistic density in (2). First, let us consider a multivariate extension. Consider the $m\times1$ real vector $X$, $X'=(x_{1},\ldots,x_{m})$, where $x_{1},\ldots,x_{m}$ are functionally independent (distinct) real scalar variables, $-\infty<x_{j}<\infty$, $j=1,\ldots,m$. If $\tilde{X}$ is in the complex domain, then $\tilde{X}=X_{1}+iX_{2}$, $i=\sqrt{-1}$, where $X_{1}$ and $X_{2}$ are real. Then, $\tilde{X}^{*}\tilde{X}=X_{1}'X_{1}+X_{2}'X_{2}$, which is real. Extensions of the logistic model will be considered by taking the argument $X$ as a vector or a matrix in the real and complex domains. In order to derive such models, we will require Jacobians of matrix transformations in three situations, and these will be listed here as lemmas without proofs. For the proofs and other details, see Ref. [1].
Lemma 1. Let the $p\times m$, $p\ge m$, matrix $X$ of rank $m$ be in the real domain with distinct elements $x_{ij}$'s. Let the matrix $S=X'X$, which is real positive definite. Then, going through a transformation involving a lower triangular matrix with positive diagonal elements and a semi-orthonormal matrix, and after integrating out the differential element corresponding to the semi-orthonormal matrix, we will have the following connection between $\mathrm{d}X$ and $\mathrm{d}S$; see the details in Ref. [1]:
$$\mathrm{d}X=\frac{\pi^{\frac{pm}{2}}}{\Gamma_{m}(\frac{p}{2})}\,|S|^{\frac{p}{2}-\frac{m+1}{2}}\,\mathrm{d}S,\quad p\ge m,$$
where, for example, $\Gamma_{m}(\alpha)$ is the real matrix-variate gamma function given by
$$\Gamma_{m}(\alpha)=\pi^{\frac{m(m-1)}{4}}\Gamma(\alpha)\Gamma(\alpha-\tfrac12)\cdots\Gamma(\alpha-\tfrac{m-1}{2})=\int_{S>O}|S|^{\alpha-\frac{m+1}{2}}\mathrm{e}^{-\mathrm{tr}(S)}\,\mathrm{d}S,\quad \Re(\alpha)>\frac{m-1}{2},$$
where $\mathrm{tr}(S)$ means the trace of the square matrix $S$. We call $\Gamma_{m}(\alpha)$ the real matrix-variate gamma because it is associated with the real matrix-variate gamma integral. It is also known by different names in the literature. Similarly, $\tilde{\Gamma}_{m}(\alpha)$ is called the complex matrix-variate gamma because it is associated with the complex matrix-variate gamma integral. When the $p\times m$ matrix $\tilde{X}$ of rank $m$, with distinct elements, is in the complex domain, and letting $\tilde{S}=\tilde{X}^{*}\tilde{X}$, which is $m\times m$ and Hermitian positive definite, then, going through a transformation involving a lower triangular matrix with real and positive diagonal elements and a semi-unitary matrix, and then integrating out the differential element corresponding to the semi-unitary matrix, we can establish the following connection between $\mathrm{d}\tilde{X}$ and $\mathrm{d}\tilde{S}$:
$$\mathrm{d}\tilde{X}=\frac{\pi^{pm}}{\tilde{\Gamma}_{m}(p)}\,|\det(\tilde{S})|^{p-m}\,\mathrm{d}\tilde{S},\quad p\ge m,$$
where, for example, $\tilde{\Gamma}_{m}(\alpha)$ is the complex matrix-variate gamma function given by
$$\tilde{\Gamma}_{m}(\alpha)=\pi^{\frac{m(m-1)}{2}}\Gamma(\alpha)\Gamma(\alpha-1)\cdots\Gamma(\alpha-m+1)=\int_{\tilde{S}>O}|\det(\tilde{S})|^{\alpha-m}\mathrm{e}^{-\mathrm{tr}(\tilde{S})}\,\mathrm{d}\tilde{S},\quad \Re(\alpha)>m-1.$$

This paper is organized as follows. An extended zeta function is introduced in Section 2 and then several extended logistic models are discussed, which will all result in the extended zeta function.
Section 3 deals with some Bayesian-type models, which will also result in extended zeta functions.
Section 4 provides some concluding remarks.
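The real-case relation in Lemma 1 can be illustrated numerically. The sketch below assumes the form of the lemma as reconstructed above (a $p\times m$ matrix $X$ with $S=X'X$); if the $pm$ entries of $X$ are independent $N(0,1/2)$ variables, the joint density of $X$ is $\pi^{-pm/2}\mathrm{e}^{-\mathrm{tr}(X'X)}$ and the lemma then implies $E[|X'X|^{h}]=\Gamma_{m}(p/2+h)/\Gamma_{m}(p/2)$.

```python
import numpy as np
from scipy.special import multigammaln   # log of the real matrix-variate gamma

p, m, h = 6, 3, 1.25
rng = np.random.default_rng(1)
X = rng.normal(scale=np.sqrt(0.5), size=(200_000, p, m))
S = np.einsum('kpi,kpj->kij', X, X)      # S = X'X for each draw
mc = np.mean(np.linalg.det(S) ** h)      # Monte Carlo estimate of E[|X'X|^h]
exact = np.exp(multigammaln(p / 2 + h, m) - multigammaln(p / 2, m))
print(mc, exact)   # the two numbers should agree to Monte Carlo accuracy
```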
2. Extended Zeta Function
For appropriate parameter values, the following convergent series is defined as the extended zeta function:
where, for example, $(b)_{k}$ is the Pochhammer symbol, $(b)_{k}=b(b+1)\cdots(b+k-1)$, $(b)_{0}=1$, $b\neq0$, or $(b)_{k}=\frac{\Gamma(b+k)}{\Gamma(b)}$ whenever the gammas are defined.
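The exact parameterization of the extended zeta function is not reproduced above. As an assumed, purely illustrative form, the series produced by the binomial expansions used in the proofs below is $\sum_{k=0}^{\infty}(-1)^{k}\frac{(g)_{k}}{k!}(k+1)^{-\rho}$, which reduces to the Dirichlet eta function when $g=1$; the sketch below sums it numerically (all symbol names and parameter values are hypothetical).

```python
import numpy as np
from scipy.special import gammaln, zeta

def extended_zeta(rho, g, terms=200_000):
    """Partial sum of  sum_{k>=0} (-1)^k (g)_k / k! * (k+1)^(-rho).

    Assumed illustrative form only: it is the series produced by expanding
    (1 + e^{-u})^{-g} binomially, as in the proofs below; the paper's exact
    parameterization may differ.
    """
    k = np.arange(terms)
    log_coeff = gammaln(g + k) - gammaln(g) - gammaln(k + 1)   # log[(g)_k / k!]
    return np.sum((-1.0) ** k * np.exp(log_coeff) / (k + 1.0) ** rho)

# For g = 1 the series reduces to the Dirichlet eta function:
# eta(rho) = (1 - 2^(1 - rho)) * zeta(rho).
rho = 3.0
print(extended_zeta(rho, 1.0), (1.0 - 2.0 ** (1.0 - rho)) * zeta(rho))
```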
First, we will consider a vector-variate random variable in the real and complex domains. Then, we will consider the general case of a matrix-variate random variable in the real and complex domains. Associated with these random variables, we will construct a number of densities that can be taken as extensions of the logistic density to the vector/matrix-variate cases. Each model and the evaluation of the corresponding normalizing constant will be stated as a theorem.
Theorem 1. Let $X$ be a real vector and consider the following density, in which the constant factor is the normalizing constant. Then, for the model the normalizing constant is given by the following:

Proof. Since $X'X>0$, the exponential factor lies strictly between zero and one, and hence a binomial expansion is valid for the denominator in (5). That is, the denominator can be expanded as a convergent binomial series. Let $u=X'X$. Note that $u$ is $1\times1$, and then Lemma 1 gives the connection between $\mathrm{d}X$ and $\mathrm{d}u$. Then, from (5), (7), and (8), the required integral is obtained, valid in the convergence region of the parameters. This integral is evaluated by using a real scalar variable gamma integral. Hence, when the function in (5) is a density of $X$, the normalizing constant is as stated in the theorem. This establishes the theorem. □
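The mechanism of this proof can be illustrated numerically. The model form used below, $c\,\mathrm{e}^{-X'X}(1+\mathrm{e}^{-X'X})^{-g}$ for an $m\times1$ real vector $X$, is a hypothetical but representative special case (the exact parameterization of (5) is not reproduced here); with $u=X'X$ and the Jacobian described above, the normalizing constant reduces to a gamma factor times an alternating series.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gammaln, gamma as Gamma

m, g = 5, 2.3   # illustrative values for the assumed model

def integrand(u):
    # u^{m/2 - 1} comes from the Jacobian of Lemma 1 applied to an m x 1 vector
    return u ** (m / 2 - 1) * np.exp(-u) / (1 + np.exp(-u)) ** g

direct, _ = quad(integrand, 0, np.inf)

k = np.arange(200_000)
series = np.sum((-1.0) ** k
                * np.exp(gammaln(g + k) - gammaln(g) - gammaln(k + 1))
                * (k + 1.0) ** (-m / 2))
print(direct, Gamma(m / 2) * series)        # the two values should agree
print('1/c =', np.pi ** (m / 2) * series)   # normalizing constant of the assumed model
```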
In order to avoid using too many numbers for the functions and equations, we will use the following simplified notation to denote results in the complex domain by appending a letter “c” to the function number. For example, the density in the complex domain corresponding to a given density in the real domain will be denoted by the same number with “c” appended. With these notations, we will list the counterpart of Theorem 1 for the complex domain. Let $\tilde{X}$ be a vector in the complex domain with distinct complex scalar elements. Let the normalizing constant in the complex domain, corresponding to the one in the real domain, be denoted with the same convention. Then, we have the theorem in the complex domain, corresponding to Theorem 1, which is stated as Theorem 2.
Theorem 2. Let $\tilde{X}$ be a vector random variable in the complex domain with distinct complex scalar elements. Let $\tilde{X}^{*}$ denote the conjugate transpose of $\tilde{X}$. Consider the model Then, the normalizing constant is given by the following: for the parameters in the region of convergence.
Proof. Note that $\tilde{X}=X_{1}+iX_{2}$, where $X_{1}$ and $X_{2}$ are real vectors. Then, there are $2m$ real scalar variables in $\tilde{X}$ compared to $m$ real scalar variables in the real case $X$. Hence, from Lemma 1 in the complex case, we have the connection between $\mathrm{d}\tilde{X}$ and $\mathrm{d}u$ for $u=\tilde{X}^{*}\tilde{X}$, which is again real. Then, following through the derivation in the real case, we have the normalizing constant as given in (11). □
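A numerical sketch of the complex-domain counterpart (the distributional choice below is only a device for checking the Jacobian-induced exponent and is not part of the model): with the entries of an $m\times1$ complex vector $Z$ having joint density $\pi^{-m}\mathrm{e}^{-Z^{*}Z}$, the complex version of Lemma 1 gives $E[(Z^{*}Z)^{h}]=\Gamma(m+h)/\Gamma(m)$.

```python
import numpy as np
from scipy.special import gamma as Gamma

m, h = 4, 0.9
rng = np.random.default_rng(6)
# real and imaginary parts iid N(0, 1/2)  =>  joint density pi^{-m} exp(-Z*Z)
Z = (rng.normal(scale=np.sqrt(0.5), size=(300_000, m))
     + 1j * rng.normal(scale=np.sqrt(0.5), size=(300_000, m)))
u = np.sum(np.abs(Z) ** 2, axis=1)                 # u = Z*Z, a real scalar
print(np.mean(u ** h), Gamma(m + h) / Gamma(m))    # should agree
```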
Note 2.1. Ref. [
2] deals with the analysis of Polarimetric Synthetic Aperture Radar (PolSAR) multi-look return signal data. The return signal cross-section has two components, called speckle and texture. Speckle is the noise-like contaminant on the cross-section and texture is the cross-section variable itself. The analysis is frequently done under the assumption that the speckle part has a complex Gaussian distribution. It is found that non-Gaussian models give better representations in certain uneven regions, such as urban areas, forests, and sea surfaces; see, for example, Refs. [
3,
4,
5,
6]. Various types of multivariate distributions in the complex domain, associated with the complex Gaussian and complex Wishart, are used in the study of the speckle (noise-like appearance in PolSAR images) part, and real positive scalar and positive definite matrix-variate distributions are used for the texture (spatial variation in the radar cross-section) part. In industrial applications, a logistic model is often preferred to a standard Gaussian model because, even though the graphs of the two look alike, the tail is thicker in the logistic case than in the standard Gaussian case. Hence, multivariate and matrix-variate models that extend the logistic model in the real scalar case may qualify as more appropriate models in many areas where logistic-type behavior is preferred. Analysts of single-look and multi-look sonar and radar data are likely to find extended logistic models better suited to their data than Gaussian- and Wishart-based models. These are the motivating factors in introducing extended logistic models, especially in the form of multivariate and matrix-variate distributions, in the complex domain. Moreover, the models in Theorems 1 and 2 can be generalized by replacing $X'X$ in the real case by $(X-\mu)'A(X-\mu)$, where $\mu=E[X]$ denotes the expected value of $X$ and $A$ is an $m\times m$ real positive definite constant matrix. In this case, the only change is that the normalizing constant will be multiplied by $|A|^{\frac{1}{2}}$. In the complex case, replace $\tilde{X}^{*}\tilde{X}$ by $(\tilde{X}-\tilde{\mu})^{*}A(\tilde{X}-\tilde{\mu})$, where $\tilde{\mu}=E[\tilde{X}]$ and $A$ is an $m\times m$ constant Hermitian positive definite matrix. In this case, the only change is that the normalizing constant will be multiplied by $|\det(A)|$, i.e., the absolute value of the determinant of $A$.
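The effect of the replacement $X'X\to(X-\mu)'A(X-\mu)$ on the normalizing constant can be checked in a small case. The sketch below uses arbitrary illustrative $\mu$ and $A$ and the simplest integrand, confirming the $|A|^{1/2}$ factor through $\int \mathrm{e}^{-(X-\mu)'A(X-\mu)}\,\mathrm{d}X=\pi^{m/2}|A|^{-1/2}$ for $m=2$.

```python
import numpy as np
from scipy.integrate import dblquad

A = np.array([[2.0, 0.3], [0.3, 1.5]])   # arbitrary positive definite matrix
mu = np.array([0.7, -1.2])               # arbitrary location vector

def f(y, x):
    v = np.array([x, y]) - mu
    return np.exp(-v @ A @ v)

num, _ = dblquad(f, -10, 10, lambda x: -10, lambda x: 10)
print(num, np.pi / np.sqrt(np.linalg.det(A)))   # should match: pi^{m/2}|A|^{-1/2}
```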
Note that the $h$-th moments of $X'X$ and $\tilde{X}^{*}\tilde{X}$, for an arbitrary $h$, are available from the respective normalizing constants in the extended logistic models by shifting the corresponding exponent by $h$ and then taking the ratios of the respective normalizing constants.
Now, we consider a slightly more general model. Again, let $X$ be a real vector random variable. In Theorem 1, replace $X'X$ in the exponent by $(X'X)^{\delta}$, $\delta>0$, i.e., introduce an arbitrary exponent $\delta$, and let the resulting model be as denoted in Theorem 3 below. Then, go through the same steps as in the proof of Theorem 1; that is, expand the denominator binomially and evaluate each term by a gamma integral after a power substitution. Then, we will have the following final model, which is stated as a theorem.
Theorem 3. Let $X$ be a real vector with distinct real scalar variables as elements. For $\delta>0$, consider replacing $X'X$ in the exponent in Theorem 1 by $(X'X)^{\delta}$. Then, we have the following model: where, for $X$ in the real domain, the normalizing constant is the following: A more general form is to replace $X'X$ by $(X-\mu)'A(X-\mu)$. Then, the only change is that the normalizing constant will be multiplied by $|A|^{\frac{1}{2}}$. Now, consider the corresponding replacement of $\tilde{X}^{*}\tilde{X}$ by $(\tilde{X}^{*}\tilde{X})^{\delta}$. Then, proceeding as in the real case, we have the corresponding resulting model in the complex domain, given as the following.
Theorem 4. Let $\tilde{X}$ be a complex vector random variable with distinct scalar complex variables as elements. For $\delta>0$, consider the replacement of $\tilde{X}^{*}\tilde{X}$ in Theorem 2 by $(\tilde{X}^{*}\tilde{X})^{\delta}$. Then, we have the following model: where, for $\tilde{X}$ in the complex domain, the normalizing constant is the following: A more general situation is to replace $\tilde{X}^{*}\tilde{X}$ by $(\tilde{X}-\tilde{\mu})^{*}A(\tilde{X}-\tilde{\mu})$, where $A$ is a Hermitian positive definite constant matrix. As explained before, the only change is that the normalizing constant will be multiplied by $|\det(A)|$, the absolute value of the determinant of $A$.
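The computational step behind the arbitrary exponent $\delta$ (written here with hypothetical symbol names, since the exact parameterizations in Theorems 3 and 4 are not reproduced above) is the power-substitution gamma integral that evaluates each term of the binomial expansion.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma as Gamma

# int_0^inf u^(alpha-1) exp(-c u^delta) du = Gamma(alpha/delta) / (delta * c^(alpha/delta))
alpha, delta, c = 2.4, 1.7, 3.0   # arbitrary illustrative values
num, _ = quad(lambda u: u ** (alpha - 1) * np.exp(-c * u ** delta), 0, np.inf)
print(num, Gamma(alpha / delta) / (delta * c ** (alpha / delta)))   # should match
```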
Now, we consider a more general case of a rectangular matrix-variate random variable in the real domain. Let $X$ be a $p\times m$, $p\ge m$, matrix of rank $m$ in the real domain with distinct real scalar variables as elements. Let $S=X'X$. Then, $S$ is $m\times m$ and positive definite. Then, the connection between $\mathrm{d}X$ and $\mathrm{d}S$ is available from the real version of Lemma 1. In the corresponding complex case, let $\tilde{X}$ be $p\times m$ and of rank $m$. Let $\tilde{S}=\tilde{X}^{*}\tilde{X}$. Then, $\tilde{S}$ is $m\times m$ and Hermitian positive definite, and the connection between $\mathrm{d}\tilde{X}$ and $\mathrm{d}\tilde{S}$ is given in Lemma 1. Then, proceeding as in the derivation of Theorems 1 and 2, we have the following theorems.
Theorem 5. Let $X$ be a $p\times m$ matrix of rank $m$ with distinct real scalar variables as elements. For $\mathrm{tr}(\cdot)$ denoting the trace of $(\cdot)$, consider the following model: Then, the normalizing constant is given by the following: for the parameters in the region of convergence. The connection between $\mathrm{d}X$ and $\mathrm{d}S$, $S=X'X$, is given in Lemma 1. Then, following steps parallel to the derivation in Theorem 1, the result in (14) follows. This model can be extended by replacing $X$ with $A^{\frac{1}{2}}(X-M)$, where $M=E[X]$ is a $p\times m$ constant matrix and $A$ is a $p\times p$ real constant positive definite matrix, and $A^{\frac{1}{2}}$ denotes the positive definite square root of the positive definite matrix $A$. The only change is that the normalizing constant will be multiplied by $|A|^{\frac{m}{2}}$. This is available from Lemma 2 given below. For the corresponding complex case, we have already given the connection between $\mathrm{d}\tilde{X}$ and $\mathrm{d}\tilde{S}$. Then, the remaining procedure is parallel to the derivation in Theorem 2, and hence the result is given here without proof [
Lemma 2. Consider a $p\times q$ matrix $X=(x_{ij})$, where the $x_{ij}$'s are functionally independent (distinct) real scalar variables, and let $A$ be a $p\times p$ and $B$ be a $q\times q$ nonsingular constant matrix. Then,
$$Y=AXB\;\Rightarrow\;\mathrm{d}Y=|A|^{q}|B|^{p}\,\mathrm{d}X.$$
When the matrix $\tilde{X}$ is in the complex domain and when $A$ and $B$ are $p\times p$ and $q\times q$ nonsingular constant matrices in the real or complex domain, then
$$\tilde{Y}=A\tilde{X}B\;\Rightarrow\;\mathrm{d}\tilde{Y}=|\det(A)|^{2q}|\det(B)|^{2p}\,\mathrm{d}\tilde{X},$$
where $|\det(\cdot)|$ denotes the absolute value of the determinant of $(\cdot)$.

Theorem 6. Let $\tilde{X}$ be a $p\times m$ matrix of rank $m$ in the complex domain with distinct complex scalar variables as elements. Consider the following model: Then, the normalizing constant is given by the following: for the parameters in the region of convergence. An extension of the model is available by replacing $\tilde{X}$ by $A^{\frac{1}{2}}(\tilde{X}-\tilde{M})$, where $\tilde{M}=E[\tilde{X}]$ is a $p\times m$ constant matrix and $A$ is a $p\times p$ constant Hermitian positive definite matrix, and $A^{\frac{1}{2}}$ denotes the Hermitian positive definite square root of $A$. The only change is that the normalizing constant will be multiplied by $|\det(A)|^{m}$ from Lemma 2.
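The real-case Jacobian in Lemma 2 can be verified numerically through the vec representation of the linear map $X\mapsto AXB$; the sizes below are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(2)
p, q = 3, 4
A = rng.standard_normal((p, p))
B = rng.standard_normal((q, q))

J = np.kron(B.T, A)                       # matrix of the map vec(X) -> vec(AXB)
lhs = np.linalg.det(J)                    # Jacobian determinant of the map
rhs = np.linalg.det(A) ** q * np.linalg.det(B) ** p
print(np.isclose(lhs, rhs))               # True
```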
Again, we start with a real $p\times m$ matrix $X$ of rank $m$ with $pm$ distinct real scalar variables as elements. Then, we know that $\mathrm{tr}(XX')$ is the sum of squares of the $pm$ real scalar variables in $X$ because, for any matrix $X=(x_{ij})$ in the real domain, $\mathrm{tr}(XX')=\sum_{i,j}x_{ij}^{2}$. In the complex domain, the corresponding result is $\mathrm{tr}(\tilde{X}\tilde{X}^{*})=\sum_{i,j}(x_{1ij}^{2}+x_{2ij}^{2})$, where $x_{1ij},x_{2ij}$ are real scalar quantities and $\tilde{X}=X_{1}+iX_{2}=(x_{1ij}+ix_{2ij})$. Thus, in the complex case, there will be twice the number of real variables compared to the corresponding real case. In the real case, the sum of squares of $pm$ quantities can be taken as coming from a form $Z'Z$, where $Z$ is a $pm\times1$ real vector. Then, one can apply Lemma 1 on this real vector, and, in the real case, if $u=Z'Z=\mathrm{tr}(XX')$, then the connection between $\mathrm{d}X$ and $\mathrm{d}u$ in the real case and $\mathrm{d}\tilde{X}$ and $\mathrm{d}u$ in the complex domain is the following:
$$\mathrm{d}X=\frac{\pi^{\frac{pm}{2}}}{\Gamma(\frac{pm}{2})}\,u^{\frac{pm}{2}-1}\,\mathrm{d}u\quad\text{and}\quad \mathrm{d}\tilde{X}=\frac{\pi^{pm}}{\Gamma(pm)}\,u^{pm-1}\,\mathrm{d}u,\;\;u=\mathrm{tr}(\tilde{X}\tilde{X}^{*})\text{ in the complex case}.$$
Now, proceeding as in the derivation of Theorem 1, we have the following results, which are stated as theorems.
Theorem 7. Let $X$ be a $p\times m$ matrix of rank $m$ in the real domain with distinct real scalar variables as elements. For $\delta>0$, consider the model Then, the normalizing constant is the following: for the parameters in the region of convergence. A generalization can be made by replacing $X$ by $A^{\frac{1}{2}}(X-M)$, with $M=E[X]$, as explained in connection with Theorem 5. Hence, further details are omitted. In the corresponding complex case, we have the following result.
Theorem 8. Let $\tilde{X}$ be a $p\times m$ matrix in the complex domain of rank $m$ and with distinct complex scalar variables as elements. Let $u=\mathrm{tr}(\tilde{X}\tilde{X}^{*})$. The connection between $\mathrm{d}\tilde{X}$ and $\mathrm{d}u$ is explained above. Consider the model Then, the normalizing constant is given by the following: for the parameters in the region of convergence. A generalization is available by replacing $\tilde{X}$ by $A^{\frac{1}{2}}(\tilde{X}-\tilde{M})$, as explained before.
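As a quick numerical check of the trace identity underlying Theorems 7 and 8 (arbitrary matrix sizes, purely illustrative), including the doubling of real variables in the complex case:

```python
import numpy as np

rng = np.random.default_rng(3)
X = rng.standard_normal((5, 3))
print(np.isclose(np.trace(X @ X.T), np.sum(X ** 2)))          # True

Z = rng.standard_normal((5, 3)) + 1j * rng.standard_normal((5, 3))
# tr(Z Z*) is real and equals the sum of squares of all 2*5*3 real variables
print(np.isclose(np.trace(Z @ Z.conj().T).real,
                 np.sum(Z.real ** 2) + np.sum(Z.imag ** 2)))  # True
```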
Next, we consider the case where $X$ is an $m\times m$ real positive definite matrix, $X>O$, and the corresponding case in the complex domain where the matrix $\tilde{X}$ is Hermitian positive definite, $\tilde{X}=\tilde{X}^{*}>O$. In the real case, we consider the following model, which is stated as a theorem.
Theorem 9. Let $X$ be real positive definite, $X>O$. Then, for the model the normalizing constant is given by the following: for the parameters in the region of convergence.

Proof. Since $X>O$ implies $\mathrm{tr}(X)>0$, the exponential factor lies strictly between zero and one, and we expand the denominator by using a binomial expansion. Now, the integral to be evaluated in each term can be obtained by using the real matrix-variate gamma integral of Lemma 1. Then, the summation over $k$ gives the normalizing constant, for the parameters in the region of convergence. This makes the normalizing constant as given above, which establishes the result. □
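The real matrix-variate gamma integral used term by term in this proof can be checked directly in a small case. The sketch below verifies $\int_{X>O}|X|^{\alpha-(m+1)/2}\mathrm{e}^{-b\,\mathrm{tr}(X)}\mathrm{d}X=\Gamma_{m}(\alpha)\,b^{-m\alpha}$ for $m=2$, with arbitrary illustrative parameter values.

```python
import numpy as np
from scipy.integrate import tplquad
from scipy.special import gamma as Gamma

# Parameterize the 2 x 2 positive definite matrix as X = [[x, z], [z, y]] > O,
# i.e. x > 0, y > 0, x*y - z^2 > 0.
alpha, b = 2.0, 1.0

val, _ = tplquad(
    lambda z, y, x: max(x * y - z * z, 0.0) ** (alpha - 1.5) * np.exp(-b * (x + y)),
    0, 30,                                                     # x
    lambda x: 0, lambda x: 30,                                 # y
    lambda x, y: -np.sqrt(x * y), lambda x, y: np.sqrt(x * y)) # z

# Gamma_2(alpha) = pi^{1/2} Gamma(alpha) Gamma(alpha - 1/2)
exact = np.sqrt(np.pi) * Gamma(alpha) * Gamma(alpha - 0.5) * b ** (-2 * alpha)
print(val, exact)   # both should be about pi/2
```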
In the corresponding complex domain, the steps are parallel and we integrate out over the Hermitian positive definite matrix by using the complex matrix-variate gamma integral given in Lemma 1. Hence, we state the theorem in the complex domain without proof.
Theorem 10. Let $\tilde{X}$ be an $m\times m$ matrix in the complex domain and let $\tilde{X}$ be Hermitian positive definite, i.e., $\tilde{X}=\tilde{X}^{*}>O$. Then, for the following model the normalizing constant is the following: for the parameters in the region of convergence.

So far, we have considered generalized matrix-variate logistic models involving a rectangular matrix in which either the exponential trace has an arbitrary power with a trace raised to a power entering the model as a product factor, or the product factor is a determinant but the exponential trace has power one. The only remaining situation is that in which a rectangular matrix is involved, the exponential trace has an arbitrary power, and, at the same time, a factor containing a determinant and another factor containing a trace are present in the model. This is the most general form of a generalized matrix-variate logistic model. We will consider this situation next, for which we need a recent result from Ref. [
7], which is stated here as a lemma.
Lemma 3. Let $X$ be a $p\times m$ matrix in the real domain of rank $m$ with distinct real scalar variables as elements. Then, for the parameters in the region of convergence, the following holds: Let $\tilde{X}$ be a $p\times m$ matrix in the complex domain of rank $m$ with distinct complex scalar variables as elements. Then, for the parameters in the region of convergence, the corresponding result holds: The proofs of the above results in Lemma 3 are very lengthy and hence only the results are given here, without the derivations. The model in Lemma 3 in the real case, excluding the determinant factor, is often known in the literature as Kotz’s model. Some authors (Ref. [
8]) call the model in the real case, including the determinant factor, Kotz’s model also. Unfortunately, the normalizing constant traditionally used in such a model (Ref. [
8]) is found to be incorrect. The correct normalizing constant and the detailed steps in the derivation and the correct integration procedure are given in Ref. [
7]. The most general rectangular matrix-variate logistic density will be stated next as a theorem, where we take all parameters to be real.
Theorem 11. Let $X$ be a $p\times m$ matrix in the real domain of rank $m$ and with distinct real scalar variables as elements. Then, for the model the normalizing constant is given by the following: for the parameters in the region of convergence. The above result follows directly from Lemma 3 in the real case. We state the corresponding result in the complex domain. Again, the result follows from the complex version of Lemma 3 and hence the theorem is stated without proof.
Theorem 12. Let $\tilde{X}$ be a $p\times m$ matrix in the complex domain of rank $m$ and with distinct complex scalar variables as elements. Then, for the following model the normalizing constant is given by the following: for the parameters in the region of convergence.

Special cases of Theorems 11 and 12 are obtained by specializing the parameters, for example, by taking the power in the exponential trace to be one, or by omitting the determinant factor or the trace factor while keeping the remaining parameters general. It can be seen that the case in which all the factors are present, with a general power in the exponential trace, is the most difficult one in which to evaluate the normalizing constant. For the other situations, there are several other techniques available. The generalization of the above model in the real case is that in which $X$ is replaced by $A^{\frac{1}{2}}(X-M)B^{\frac{1}{2}}$, where $M=E[X]$, and $A$ is a $p\times p$ and $B$ is an $m\times m$ constant positive definite matrix; $A^{\frac{1}{2}}$ ($B^{\frac{1}{2}}$) is the positive definite square root of $A$ ($B$). In the complex domain, one can replace $\tilde{X}$ by $A^{\frac{1}{2}}(\tilde{X}-\tilde{M})B^{\frac{1}{2}}$, where $\tilde{M}=E[\tilde{X}]$, and $A$ and $B$ are $p\times p$ and $m\times m$ constant Hermitian positive definite matrices; $A^{\frac{1}{2}}$ is the Hermitian positive definite square root of the Hermitian positive definite matrix $A$, and similarly for $B^{\frac{1}{2}}$. In these general cases, several particular cases are of interest in the real domain, and one can consider the corresponding special cases in the complex domain. In what we have taken, we have the same parameters for the numerator and denominator exponential traces. If we take different parameters for the numerator and for the denominator, then the problem becomes very difficult. In the scalar case, the integral to be evaluated is of the form
When
or vice versa, then the problem is a generalization of a Bessel integral (Ref. [
9]) or Krätzel integral in applied analysis or a reaction-rate probability integral in nuclear reaction rate theory. This can be handled through the Mellin convolution of a product and then taking the inverse Mellin transform, which will usually lead to an H-function for general parameters. When
, this situation can be handled through the Mellin convolution of a ratio and then taking the inverse Mellin transform, which will lead to an H-function for general parameters. In the matrix-variate case, when
or
, one can interpret it as the M-convolution of a product or as the density of a symmetric product of positive definite or Hermitian positive definite matrices. The unique inversion of the M-convolution of a product is not available and hence the explicit form of the density of a symmetric product of matrices is not available in the general situation. Similarly, when
, one can interpret it as the M-convolution of a ratio or as the density of a symmetric ratio of positive definite matrix-variate random variables. Again, the explicit form of the density is not available for general parameters. These are all open problems. For M-convolutions and symmetric product and symmetric ratios of matrices, see Ref. [
7].
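In the simplest scalar special case (both powers in the exponent equal to one), the Krätzel/reaction-rate type integral mentioned above reduces to a Bessel form, which can be checked numerically (the parameter values below are arbitrary):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import kv

# int_0^inf x^(a-1) exp(-p*x - q/x) dx = 2 (q/p)^(a/2) K_a(2 sqrt(p*q))
a, p, q = 1.7, 2.0, 3.0
num, _ = quad(lambda x: x ** (a - 1) * np.exp(-p * x - q / x), 0, np.inf)
print(num, 2 * (q / p) ** (a / 2) * kv(a, 2 * np.sqrt(p * q)))   # should match
```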
4. Concluding Remarks
In this paper, we have introduced a number of matrix-variate models that can be considered as matrix-variate extensions of the basic scalar variable logistic model. Matrix-variate analogues, both in the real and complex domains, are considered. A few Bayesian situations are also discussed, where the prior distribution for the conditioned variable can be looked upon as an extension of the logistic model. One can consider other matrix-variate distributions for the conditional distributions and compatible prior distributions for the conditioned matrix variable. One challenging problem is to evaluate matrix-variate versions of Bessel integrals, Krätzel integrals, reaction-rate probability integrals, integrals involving an inverse Gaussian density in the matrix-variate case, etc., where an example is given in (36). These are open problems. The most general models are given in Theorems 11 and 12. By using these models, one can also look into Bayesian structures. For example, consider the evaluation of the normalizing constant
c in the following model, where
X is a
matrix of rank
m with distinct
real scalar variables as elements:
Let $u$ denote the scalar form appearing in the exponent of the model. Then, from Lemma 1, we have the connection between $\mathrm{d}X$ and $\mathrm{d}u$. The resulting integral over $u$ can be evaluated by identifying it with the density of a product $u=x_{1}x_{2}$, where $x_{1}$ and $x_{2}$ are statistically independently distributed real scalar random variables with the density functions $h_{1}(x_{1})$ and $h_{2}(x_{2})$, respectively. Let $h(u)$ be the density of $u$. Then,
$$h(u)=\int_{v}\frac{1}{v}\,h_{1}(v)\,h_{2}\!\left(\frac{u}{v}\right)\mathrm{d}v.$$
One can take $h_{1}$ and $h_{2}$ to be generalized gamma densities, where the corresponding constants are the normalizing constants. Now, one can identify the integral to be evaluated with $h(u)$. Then, from the property $E[u^{s-1}]=E[x_{1}^{s-1}]\,E[x_{2}^{s-1}]$, due to statistical independence, the Mellin transform of $h$ is available from the corresponding moments in $h_{1}$ and $h_{2}$. Now, by inverting it using an inverse Mellin transform, one has the explicit form of $h(u)$; details may be seen in [7].
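The scalar product construction sketched above can be illustrated numerically. The two generalized gamma densities below are hypothetical choices (the paper's specific parameterization is not reproduced); the density of $u=x_{1}x_{2}$ is computed as the Mellin-type convolution and compared with a Monte Carlo estimate.

```python
import numpy as np
from scipy.integrate import quad
from scipy.stats import gengamma

h1 = gengamma(a=2.0, c=1.5)    # hypothetical generalized gamma densities
h2 = gengamma(a=3.0, c=0.8)

def h(u):
    """Density of the product u = x1 * x2 via the Mellin-type convolution."""
    val, _ = quad(lambda v: h1.pdf(v) * h2.pdf(u / v) / v, 0, np.inf)
    return val

rng = np.random.default_rng(4)
u_samples = h1.rvs(400_000, random_state=rng) * h2.rvs(400_000, random_state=rng)

u0 = 2.0
hist = np.mean(np.abs(u_samples - u0) < 0.05) / 0.1   # crude density estimate at u0
print(h(u0), hist)   # the two estimates should be close
```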
Instead of the form considered above in the exponent, one can have other powers, or the exponential part may contain more than one term. The corresponding equation can be evaluated by identifying it with the density of a ratio $u=\frac{x_{1}}{x_{2}}$, so that, with $x_{2}=v$ and $x_{1}=uv$, the Jacobian part is $v$. Then, with the same procedure as above, one has the Mellin transform of the density of $u$; from here, by inversion, one has the corresponding explicit form for the density of $u$.
Now, consider the evaluation of an integral of the type coming from the corresponding matrix-variate case. Let $X_{1}$ and $X_{2}$ be $m\times m$ independently distributed real positive definite matrix-variate random variables. Let $U=X_{2}^{\frac{1}{2}}X_{1}X_{2}^{\frac{1}{2}}$ be the symmetric product, with $V=X_{2}$. Then, the Jacobian part is available from [1]. Then, the density of $U$, denoted by $g(U)$, will be of the form of the corresponding M-convolution. If $X_{1}$ and $X_{2}$ are matrix-variate generalized gamma distributed, then we will need to evaluate an integral of the following form to obtain $g(U)$:
for some parameters in the region of convergence. Note that a matrix power of a sum, such as $(A+B)^{\delta}$, cannot in general be written in terms of $A^{\delta}$ and $B^{\delta}$, even if $\delta$ is a positive integer. When $A$ and $B$ commute, then one may be able to evaluate the integral in some situations, but commutativity is not a valid assumption in the above problem. The evaluation of such an integral is a challenging problem in the matrix-variate case, both in the real and complex domains. If we consider a symmetric ratio $U=X_{2}^{-\frac{1}{2}}X_{1}X_{2}^{-\frac{1}{2}}$, corresponding to the scalar case ratio $u=\frac{x_{1}}{x_{2}}$, then we will also end up with a similar integral. Such a symmetric product and symmetric ratio have practical relevance and they are also connected to fractional integrals, as illustrated in [7].
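As a closing illustration of the symmetric product referred to above (with Wishart draws used purely as convenient positive definite matrices, not as the paper's model), the determinant of $U=X_{2}^{1/2}X_{1}X_{2}^{1/2}$ factors as $\det(X_{1})\det(X_{2})$, so the determinant moments factor under independence; this is the matrix analogue of the Mellin moment property used in the scalar case.

```python
import numpy as np
from scipy.stats import wishart

def psd_sqrt(S):
    """Symmetric positive definite square root via the spectral decomposition."""
    w, V = np.linalg.eigh(S)
    return (V * np.sqrt(w)) @ V.T

rng = np.random.default_rng(5)
m, h, n = 3, 1.3, 20_000
X1 = wishart.rvs(df=7, scale=np.eye(m), size=n, random_state=rng)
X2 = wishart.rvs(df=9, scale=np.eye(m), size=n, random_state=rng)

# symmetric product U = X2^{1/2} X1 X2^{1/2}
U = np.array([psd_sqrt(b) @ a @ psd_sqrt(b) for a, b in zip(X1[:2000], X2[:2000])])
print(np.allclose(np.linalg.det(U),
                  np.linalg.det(X1[:2000]) * np.linalg.det(X2[:2000])))   # True

# determinant moments factor under independence: E[|U|^h] = E[|X1|^h] E[|X2|^h]
lhs = np.mean((np.linalg.det(X1) * np.linalg.det(X2)) ** h)
rhs = np.mean(np.linalg.det(X1) ** h) * np.mean(np.linalg.det(X2) ** h)
print(lhs, rhs)   # approximately equal, up to Monte Carlo error
```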