Article

Discrete Versions of Jensen–Fisher, Fisher and Bayes–Fisher Information Measures of Finite Mixture Distributions

by Omid Kharazmi 1 and Narayanaswamy Balakrishnan 2,*
1 Department of Statistics, Faculty of Mathematical Sciences, Vali-e-Asr University of Rafsanjan, P.O. Box 518, Rafsanjan, Iran
2 Department of Mathematics and Statistics, McMaster University, Hamilton, ON L8S 4K1, Canada
* Author to whom correspondence should be addressed.
Entropy 2021, 23(3), 363; https://doi.org/10.3390/e23030363
Submission received: 2 February 2021 / Revised: 7 March 2021 / Accepted: 16 March 2021 / Published: 18 March 2021
(This article belongs to the Special Issue Measures of Information)

Abstract:
In this work, we first consider the discrete version of the Fisher information measure and then propose the Jensen–Fisher information measure and develop some associated results. Next, we consider Fisher information and Bayes–Fisher information measures for the mixing parameter vector of a finite mixture probability mass function and establish some results. We provide some connections between these measures and some known informational measures, such as chi-square divergence, Shannon entropy, and Kullback–Leibler, Jeffreys and Jensen–Shannon divergences.

1. Introduction

Over the last seven decades, several different criteria have been introduced in the literature for measuring uncertainty in a probabilistic model. Shannon entropy and Fisher information are the most important information measures and have been used rather extensively. Information theory started with Shannon entropy, introduced in the pioneering work of Shannon [1], based on a study of systems described by probability density (or mass) functions. About two decades earlier, Fisher [2] had proposed another information measure, describing the internal properties of a probabilistic model, that plays an important role in likelihood-based inferential methods. Fisher information and Shannon entropy are fundamental criteria in statistical inference, physics, thermodynamics and information theory. Complex systems can be described by means of their behavior (Shannon information) and their architecture (Fisher information). For more discussions, see Zegers [3] and Balakrishnan and Stepanov [4].
Let X be a discrete random variable with probability mass function (PMF) P = (p_1, …, p_n). Then, the Shannon entropy of the random variable X is defined as
H(X) = H(P) = -\sum_{i=1}^{n} p_i \log p_i,
where “log” denotes the natural logarithm. For more details, see Shannon [1]. Following the work of Shannon [1], considerable attention has been paid to providing some extensions of Shannon entropy. Jensen–Shannon (JS) divergence is one such important extension of Shannon entropy that has been widely used; see Lin [5]. The Jensen–Shannon divergence between two probability mass functions P = (p_1, p_2, …, p_n) and Q = (q_1, q_2, …, q_n), for 0 ≤ α ≤ 1, is defined as
JS(P, Q; α) = H(αP + (1 - α)Q) - α H(P) - (1 - α) H(Q).
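As a small numerical illustration (not part of the original paper), the Python sketch below evaluates H(P) and JS(P, Q; α) for two strictly positive PMFs; the function names and the example PMFs are ours.

import numpy as np

def shannon_entropy(p):
    # H(P) = -sum_i p_i log p_i (natural log), assuming a strictly positive PMF
    p = np.asarray(p, dtype=float)
    return -np.sum(p * np.log(p))

def js_divergence(p, q, alpha):
    # JS(P, Q; alpha) = H(alpha P + (1 - alpha) Q) - alpha H(P) - (1 - alpha) H(Q)
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    m = alpha * p + (1.0 - alpha) * q
    return shannon_entropy(m) - alpha * shannon_entropy(p) - (1.0 - alpha) * shannon_entropy(q)

P = np.array([0.2, 0.5, 0.3])
Q = np.array([0.4, 0.4, 0.2])
print(shannon_entropy(P), js_divergence(P, Q, alpha=0.5))   # JS is symmetric in P and Q when alpha = 0.5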
The JS divergence is a smoothed and symmetric version of the most important divergence measure of information theory, namely, Kullback–Leibler divergence. Recently, Jensen–Fisher (JF) and Jensen–Gini (JG) divergence measures have been introduced by Sánchez-Moreno et al. [6] and Mehrali et al. [7], respectively.
In the present paper, motivated by the idea of JS divergence, we consider discrete versions of Fisher information (DFI) and Fisher information distance (DFID), and then develop a new information measure associated with the DFI measure. In addition, we provide some results for the Fisher information of a finite mixture probability mass function through a Bayesian perspective. The discrete Fisher information of a random variable X with PMF P = (p_1, p_2, …, p_n) is defined as
I(P) = \sum_{i=1}^{n} \frac{(p_{i+1} - p_i)^2}{p_i},    (1)
with p_{n+1} = 0.
The Fisher information in (1) has been made use of in the processing of complex and stationary signals. For example, the discrete version of Fisher information has been used in detecting epileptic seizures in EEG signals recorded in humans and turtles, in detecting dynamical changes in many non-linear models such as logistic map and Lorenz model, and also in the analysis of geoelectrical signals; see Martin et al. [8], Ramírez-Pacheco et al. [9] and Ramírez-Pacheco et al. [10] for pertinent details.
The discrete Fisher information distance (DFID) between two probability mass functions P = (p_1, p_2, …, p_n) and Q = (q_1, q_2, …, q_n) is defined as
D(P, Q) = \sum_{i=1}^{n} \left( \frac{p_{i+1}}{p_i} - \frac{q_{i+1}}{q_i} \right)^2 p_i,    (2)
where, as above, p_{n+1} = q_{n+1} = 0. For some of its properties, one may refer to Ramírez-Pacheco et al. [10] and Johnson [11].
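Both (1) and (2) are straightforward to compute. A minimal Python sketch, assuming strictly positive PMFs and the convention p_{n+1} = q_{n+1} = 0 (the function names are ours):

import numpy as np

def dfi(p):
    # Discrete Fisher information (1): sum_i (p_{i+1} - p_i)^2 / p_i, with p_{n+1} = 0
    p = np.asarray(p, dtype=float)
    p_shift = np.append(p[1:], 0.0)
    return np.sum((p_shift - p) ** 2 / p)

def dfid(p, q):
    # Discrete Fisher information distance (2): sum_i (p_{i+1}/p_i - q_{i+1}/q_i)^2 p_i
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    p_shift, q_shift = np.append(p[1:], 0.0), np.append(q[1:], 0.0)
    return np.sum((p_shift / p - q_shift / q) ** 2 * p)

P = np.array([0.2, 0.5, 0.3])
Q = np.array([0.4, 0.4, 0.2])
print(dfi(P), dfid(P, Q), dfid(P, P))    # D(P, P) = 0; note that D is not symmetric in its arguments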
With regard to informational properties of finite mixture models, one may refer to Contreras-Reyes and Cortés [12] and Abid et al. [13]. These authors have provided upper and lower bounds for the Shannon and Rényi entropies of non-Gaussian finite mixtures, namely, of skew-normal and skew-t distributions, respectively. Kolchinsky and Tracey [14] have studied upper and lower bounds for the entropy of Gaussian mixture distributions using the Bhattacharyya and Kullback–Leibler divergences.
The first purpose of this paper is to propose Jensen–Fisher information for discrete random variables X_1, …, X_n, with probability mass functions P_1, …, P_n, respectively. For this purpose, we first define the discrete version of Jensen–Fisher information for two PMFs P and Q, and then provide some results concerning this new information measure. This idea is then extended to the general case of PMFs P_1, …, P_n.
The second purpose of this work is to study Fisher and Bayes–Fisher information measures for the mixing parameter of a finite mixture probability mass function. Let P_1, …, P_n be n probability mass functions, where P_j = (p_{j1}, …, p_{jk}). Then, a finite mixture probability mass function with mixing parameter vector θ = (θ_1, …, θ_{n-1}), for n ≥ 2, is given by P_θ = (p_{θ1}, …, p_{θk}), where
p_{θj} = \frac{1}{n-1} \sum_{i=1}^{n-1} θ_i p_{ij} + \left( 1 - \frac{\sum_{i=1}^{n-1} θ_i}{n-1} \right) p_{nj},   j = 1, …, k,    (3)
with 0 ≤ θ_i ≤ 1 for i = 1, …, n-1, and \sum_{i=1}^{n-1} θ_i ≤ 1.
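To make the construction in (3) concrete, here is a small Python sketch (ours): the first n - 1 component PMFs receive weights θ_i/(n - 1) and the last component P_n absorbs the remaining probability mass.

import numpy as np

def mixture_pmf(theta, component_pmfs):
    # Finite mixture PMF (3): theta has length n - 1, component_pmfs has shape (n, k)
    theta = np.asarray(theta, dtype=float)            # 0 <= theta_i <= 1 and sum(theta) <= 1
    pmfs = np.asarray(component_pmfs, dtype=float)    # row i is the PMF P_{i+1}
    n = pmfs.shape[0]
    weights = np.append(theta / (n - 1), 1.0 - theta.sum() / (n - 1))
    return weights @ pmfs                             # P_theta = sum_i w_i P_i

P1 = np.array([0.1, 0.2, 0.3, 0.4])
P2 = np.array([0.4, 0.3, 0.2, 0.1])
P3 = np.array([0.25, 0.25, 0.25, 0.25])
p_theta = mixture_pmf([0.6, 0.3], [P1, P2, P3])
print(p_theta, p_theta.sum())                         # the weights are non-negative and sum to 1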
Let X and Y be two discrete random variables with PMFs P = (p_1, …, p_n) and Q = (q_1, …, q_n), respectively. Then, the Kullback–Leibler (KL) distance between X and Y (or P and Q) is defined as
KL(X ∥ Y) = KL(P, Q) = \sum_{i=1}^{n} p_i \log \frac{p_i}{q_i}.
The Kullback–Leibler discrimination between Y and X can be defined similarly. For more details, see Kullback and Leibler [15]. The chi-square divergence between PMFs P and Q is defined by
χ^2(P, Q) = \sum_{i=1}^{n} \frac{(p_i - q_i)^2}{q_i}.
For pertinent details, see Broniatowski [16] and Cover and Thomas [17].
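A minimal Python sketch of these two divergences for strictly positive PMFs (the helper names are ours); both vanish when P = Q and neither is symmetric in its arguments.

import numpy as np

def kl(p, q):
    # Kullback-Leibler distance KL(P, Q) = sum_i p_i log(p_i / q_i)
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    return np.sum(p * np.log(p / q))

def chi_square(p, q):
    # Chi-square divergence between P and Q, with Q in the denominator
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    return np.sum((p - q) ** 2 / q)

P = np.array([0.2, 0.5, 0.3])
Q = np.array([0.4, 0.4, 0.2])
print(kl(P, Q), kl(Q, P), chi_square(P, Q))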
The rest of this paper is organized as follows. In Section 2, we first consider the discrete version of Fisher information and then propose the discrete Jensen–Fisher information (DJFI) measure. We show that the DJFI measure can be represented as a mixture of discrete Fisher information distance measures. In Section 3, we consider a finite mixture probability mass function and establish some results for the Fisher information measure of the mixing parameter vector. We show that the Fisher information of the mixing parameter vector is connected to chi-square divergence. Next, in Section 4, we discuss the Bayes–Fisher information for the mixing parameter vector of probability mass functions under some prior distributions for the mixing parameter. We then show that this measure is connected to Shannon entropy, Jensen–Shannon divergence, Kullback–Leibler and Jeffreys divergence measures. Finally, we present some concluding remarks in Section 5.

2. Discrete Version of Jensen–Fisher Information

In this section, we first give a result for the DFI measure based on the log-convex and log-concave property of the probability mass function. Then, we define the discrete Jensen–Fisher information measure, and establish some interesting properties of it.
Theorem 1.
Let P = (p_1, p_2, …, p_n) be a probability mass function.
(i) 
If P is log-concave, then I(P) ≥ p_1;
(ii) 
If P is log-convex, then I(P) ≤ p_1.
Proof. 
P is log-convex (log-concave) if p_i^2 ≤ (≥) p_{i-1} p_{i+1} for all i. So, from the definition of DFI in (1), we have
I(P) = \sum_{i=1}^{n} \frac{(p_{i+1} - p_i)^2}{p_i} ≤ (≥) p_1.
  □

2.1. Discrete Jensen–Fisher Information Based on Two Probability Mass Functions P and Q

We first define a symmetric version of DFID measure in (2), and then propose the discrete Jensen–Fisher information measure involving two probability mass functions.
Definition 1.
Let P and Q be two probability mass functions given by P = (p_1, p_2, …, p_n) and Q = (q_1, q_2, …, q_n). Then, a symmetric version of the discrete Fisher information distance in (2) is defined as
SD(P, Q) = \frac{1}{2} D\left( P, \frac{P+Q}{2} \right) + \frac{1}{2} D\left( Q, \frac{P+Q}{2} \right).
Definition 2.
Let P and Q be two probability mass functions given by P = (p_1, p_2, …, p_n) and Q = (q_1, q_2, …, q_n). Then, the discrete Jensen–Fisher information is defined as
JFI(P, Q) = \frac{I(P) + I(Q)}{2} - I\left( \frac{P+Q}{2} \right).    (4)
In the following theorem, we show that the discrete Jensen–Fisher information measure can be obtained based on mixtures of Fisher information distances.
Theorem 2.
Let P and Q be two probability mass functions given by P = (p_1, p_2, …, p_n) and Q = (q_1, q_2, …, q_n). Then,
JFI(P, Q) = \frac{1}{2} D\left( P, \frac{P+Q}{2} \right) + \frac{1}{2} D\left( Q, \frac{P+Q}{2} \right) = SD(P, Q).
Proof. 
From the definition of DFID in (2), we get
D\left( P, \frac{P+Q}{2} \right) = \sum_{i=1}^{n} \left( \frac{p_{i+1}}{p_i} - \frac{p_{i+1}+q_{i+1}}{p_i+q_i} \right)^2 p_i
= \sum_{i=1}^{n} \left[ \left( \frac{p_{i+1}}{p_i} - 1 \right) - \left( \frac{p_{i+1}+q_{i+1}}{p_i+q_i} - 1 \right) \right]^2 p_i
= \sum_{i=1}^{n} \left( \frac{p_{i+1}}{p_i} - 1 \right)^2 p_i - 2 \sum_{i=1}^{n} \left( \frac{p_{i+1}}{p_i} - 1 \right) \left( \frac{p_{i+1}+q_{i+1}}{p_i+q_i} - 1 \right) p_i + \sum_{i=1}^{n} \left( \frac{p_{i+1}+q_{i+1}}{p_i+q_i} - 1 \right)^2 p_i
= \sum_{i=1}^{n} \frac{(p_{i+1}-p_i)^2}{p_i} - 2 \sum_{i=1}^{n} \frac{(p_{i+1}-p_i)\left( p_{i+1}+q_{i+1}-(p_i+q_i) \right)}{p_i+q_i} + \sum_{i=1}^{n} \frac{\left( p_{i+1}+q_{i+1}-(p_i+q_i) \right)^2}{(p_i+q_i)^2} p_i.
In a similar way, we get
D\left( Q, \frac{P+Q}{2} \right) = \sum_{i=1}^{n} \frac{(q_{i+1}-q_i)^2}{q_i} - 2 \sum_{i=1}^{n} \frac{(q_{i+1}-q_i)\left( p_{i+1}+q_{i+1}-(p_i+q_i) \right)}{p_i+q_i} + \sum_{i=1}^{n} \frac{\left( p_{i+1}+q_{i+1}-(p_i+q_i) \right)^2}{(p_i+q_i)^2} q_i.
Upon adding the above two expressions, we obtain
D\left( P, \frac{P+Q}{2} \right) + D\left( Q, \frac{P+Q}{2} \right) = \sum_{i=1}^{n} \frac{(p_{i+1}-p_i)^2}{p_i} + \sum_{i=1}^{n} \frac{(q_{i+1}-q_i)^2}{q_i} - \sum_{i=1}^{n} \frac{\left( p_{i+1}+q_{i+1}-(p_i+q_i) \right)^2}{p_i+q_i}
= I(P) + I(Q) - 2 I\left( \frac{P+Q}{2} \right) = 2 JFI(P, Q),
as required.  □
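As a sanity check (ours, not from the paper), the identity of Theorem 2 can be verified numerically for randomly generated PMFs:

import numpy as np

def dfi(p):
    p = np.asarray(p, dtype=float)
    return np.sum((np.append(p[1:], 0.0) - p) ** 2 / p)

def dfid(p, q):
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    return np.sum((np.append(p[1:], 0.0) / p - np.append(q[1:], 0.0) / q) ** 2 * p)

rng = np.random.default_rng(1)
P = rng.dirichlet(np.ones(6))               # two random, strictly positive PMFs on 6 points
Q = rng.dirichlet(np.ones(6))
M = (P + Q) / 2

jfi = (dfi(P) + dfi(Q)) / 2 - dfi(M)        # Definition 2
sd = 0.5 * dfid(P, M) + 0.5 * dfid(Q, M)    # Definition 1
print(np.isclose(jfi, sd))                  # True: JFI(P, Q) = SD(P, Q)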
Example 1.
Let X = 1 with probability p and X = 0 with probability 1 - p, and let Y = 1 with probability q and Y = 0 with probability 1 - q.
The corresponding PMFs of the variables X and Y are given by P = (p, 1 - p) and Q = (q, 1 - q), respectively. From Theorem 2, we then have
JFI(P, Q) = \frac{(p - q)^2}{2 p q (p + q)}.
A 3D-plot of this JFI ( P , Q ) is presented in Figure 1.
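A quick numerical check of the closed-form expression in Example 1 (the helper dfi below is ours):

import numpy as np

def dfi(p):
    p = np.asarray(p, dtype=float)
    return np.sum((np.append(p[1:], 0.0) - p) ** 2 / p)

p, q = 0.3, 0.6
P, Q = np.array([p, 1 - p]), np.array([q, 1 - q])
M = (P + Q) / 2

jfi = (dfi(P) + dfi(Q)) / 2 - dfi(M)                  # Definition 2
closed_form = (p - q) ** 2 / (2 * p * q * (p + q))    # expression given in Example 1
print(jfi, closed_form)                               # both print 0.2777...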

2.2. Discrete Jensen–Fisher Information Based on n Probability Mass Functions P_1, …, P_n

Let P_1, …, P_n be n probability mass functions, where P_i = (p_{i1}, …, p_{ik}). In the following definition, we extend the discrete Jensen–Fisher information measure in (4) to the case of n probability mass functions.
Definition 3.
Let P_1, …, P_n be n probability mass functions given by P_i = (p_{i1}, p_{i2}, …, p_{ik}), i = 1, 2, …, n, with \sum_{j=1}^{k} p_{ij} = 1, and let α_1, …, α_n be non-negative real numbers such that \sum_{i=1}^{n} α_i = 1. Then, the discrete Jensen–Fisher information (DJFI) based on the n probability mass functions is defined as
JFI(P_1, …, P_n; \underline{α}) = \sum_{i=1}^{n} α_i I(P_i) - I\left( \sum_{i=1}^{n} α_i P_i \right) = \sum_{i=1}^{n} α_i \sum_{j=1}^{k} \frac{(p_{i,j+1} - p_{ij})^2}{p_{ij}} - \sum_{j=1}^{k} \frac{\left( \sum_{i=1}^{n} α_i p_{i,j+1} - \sum_{i=1}^{n} α_i p_{ij} \right)^2}{\sum_{i=1}^{n} α_i p_{ij}},    (5)
where \underline{α} = (α_1, …, α_n) and, as before, p_{i,k+1} = 0 for each i.
Theorem 3.
Let P_1, …, P_n be n probability mass functions given by P_i = (p_{i1}, p_{i2}, …, p_{ik}), i = 1, 2, …, n, with \sum_{j=1}^{k} p_{ij} = 1. Then, the DJFI measure can be expressed as a mixture of DFID measures in (2) as follows:
JFI(P_1, …, P_n; \underline{α}) = \sum_{i=1}^{n} α_i D(P_i, P_T),
where P_T = \sum_{i=1}^{n} α_i P_i is the weighted PMF.
Proof. 
From the definition in (5), we get
\sum_{i=1}^{n} α_i D(P_i, P_T) = \sum_{i=1}^{n} α_i \sum_{j=1}^{k} \left( \frac{p_{i,j+1}}{p_{ij}} - \frac{\sum_{l=1}^{n} α_l p_{l,j+1}}{\sum_{l=1}^{n} α_l p_{lj}} \right)^2 p_{ij}
= \sum_{i=1}^{n} α_i \sum_{j=1}^{k} \left[ \left( \frac{p_{i,j+1}}{p_{ij}} - 1 \right) - \left( \frac{\sum_{l=1}^{n} α_l p_{l,j+1}}{\sum_{l=1}^{n} α_l p_{lj}} - 1 \right) \right]^2 p_{ij}
= \sum_{i=1}^{n} α_i \sum_{j=1}^{k} \frac{(p_{i,j+1} - p_{ij})^2}{p_{ij}} - 2 \sum_{j=1}^{k} \frac{\left( \sum_{l=1}^{n} α_l p_{l,j+1} - \sum_{l=1}^{n} α_l p_{lj} \right)^2}{\sum_{l=1}^{n} α_l p_{lj}} + \sum_{j=1}^{k} \frac{\left( \sum_{l=1}^{n} α_l p_{l,j+1} - \sum_{l=1}^{n} α_l p_{lj} \right)^2}{\sum_{l=1}^{n} α_l p_{lj}}
= \sum_{i=1}^{n} α_i \sum_{j=1}^{k} \frac{(p_{i,j+1} - p_{ij})^2}{p_{ij}} - \sum_{j=1}^{k} \frac{\left( \sum_{l=1}^{n} α_l p_{l,j+1} - \sum_{l=1}^{n} α_l p_{lj} \right)^2}{\sum_{l=1}^{n} α_l p_{lj}}
= \sum_{i=1}^{n} α_i I(P_i) - I\left( \sum_{i=1}^{n} α_i P_i \right) = JFI(P_1, …, P_n; \underline{α}),
as required. □
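Theorem 3 can also be checked numerically (the helpers are ours): for random PMFs and random mixing weights, the two sides agree to machine precision.

import numpy as np

def dfi(p):
    p = np.asarray(p, dtype=float)
    return np.sum((np.append(p[1:], 0.0) - p) ** 2 / p)

def dfid(p, q):
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    return np.sum((np.append(p[1:], 0.0) / p - np.append(q[1:], 0.0) / q) ** 2 * p)

rng = np.random.default_rng(2)
n, k = 4, 5
pmfs = rng.dirichlet(np.ones(k), size=n)    # n random PMFs P_1, ..., P_n on k points
alpha = rng.dirichlet(np.ones(n))           # non-negative weights summing to 1
P_T = alpha @ pmfs                          # weighted PMF P_T = sum_i alpha_i P_i

lhs = sum(a * dfi(p) for a, p in zip(alpha, pmfs)) - dfi(P_T)   # Definition 3
rhs = sum(a * dfid(p, P_T) for a, p in zip(alpha, pmfs))        # Theorem 3
print(np.isclose(lhs, rhs))                                     # True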

3. Fisher Information of a Finite Mixture Probability Mass Function

In this section, we discuss the Fisher information for the mixing parameter vector θ of a finite mixture probability mass function.
Theorem 4.
The Fisher information of the PMF in (3) about the parameter θ_i, i = 1, …, n-1, is given by
I(θ_i) = \frac{1}{\left( θ_i - (n-2) \right)^2} χ^2\left( P_{θ^{-i}}, P_θ \right),   i = 1, …, n-1,
where P_{θ^{-i}} = (p_{θ^{-i} 1}, …, p_{θ^{-i} k}), with
p_{θ^{-i} j} = \frac{n-2}{n-1} p_{ij} + \frac{1}{n-1} \sum_{l=1, l≠i}^{n-1} θ_l p_{lj} + \frac{1}{n-1} \left( 1 - \sum_{l=1, l≠i}^{n-1} θ_l \right) p_{nj},   j = 1, …, k,    (6)
and θ^{-i} = (θ_1, …, θ_{i-1}, θ_{i+1}, …, θ_{n-1}).
Proof. 
From the definition of Fisher information, for i = 1, …, n-1, we have
I(θ_i) = \sum_{j=1}^{k} \left( \frac{∂ \log p_{θj}}{∂ θ_i} \right)^2 p_{θj} = \frac{1}{(n-1)^2} \sum_{j=1}^{k} \frac{(p_{ij} - p_{nj})^2}{p_{θj}^2} p_{θj} = \frac{1}{\left( θ_i - (n-2) \right)^2} \sum_{j=1}^{k} \frac{(p_{θ^{-i} j} - p_{θj})^2}{p_{θj}} = \frac{1}{\left( θ_i - (n-2) \right)^2} χ^2\left( P_{θ^{-i}}, P_θ \right),   i = 1, …, n-1,    (7)
where the third equality follows from the fact that, for i = 1, …, n-1,
p_{ij} - p_{nj} = \frac{n-1}{θ_i - (n-2)} \left( p_{θj} - p_{θ^{-i} j} \right).
  □
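A numerical check of Theorem 4 (the helpers are ours): the Fisher information computed directly from the score of the mixture PMF matches the chi-square representation with P_{θ^{-i}} built as in (6).

import numpy as np

def mixture_pmf(theta, pmfs):
    theta, pmfs = np.asarray(theta, dtype=float), np.asarray(pmfs, dtype=float)
    n = pmfs.shape[0]
    return np.append(theta / (n - 1), 1.0 - theta.sum() / (n - 1)) @ pmfs

rng = np.random.default_rng(3)
n, k, i = 3, 5, 0                                 # check the information about theta_1 (index i = 0)
pmfs = rng.dirichlet(np.ones(k), size=n)
theta = np.array([0.5, 0.3])

# Direct computation: I(theta_i) = sum_j (d p_{theta j} / d theta_i)^2 / p_{theta j}
p_theta = mixture_pmf(theta, pmfs)
score = (pmfs[i] - pmfs[n - 1]) / (n - 1)
fisher_direct = np.sum(score ** 2 / p_theta)

# Representation of Theorem 4, with P_{theta^{-i}} as in (6)
others = np.delete(np.arange(n - 1), i)
p_minus_i = ((n - 2) / (n - 1)) * pmfs[i] + (theta[others] @ pmfs[others]) / (n - 1) \
            + (1.0 - theta[others].sum()) / (n - 1) * pmfs[n - 1]
chi2 = np.sum((p_minus_i - p_theta) ** 2 / p_theta)          # chi-square with P_theta in the denominator
fisher_thm4 = chi2 / (theta[i] - (n - 2)) ** 2

print(np.isclose(fisher_direct, fisher_thm4))                # True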

4. Bayes–Fisher Information of a Finite Mixture Probability Mass Function

In this section, we discuss Bayes–Fisher information for the mixing parameter vector θ of the finite mixture probability mass function in (3) under some prior distributions for the mixing parameter vector. We now introduce two notations that will be used in the sequel. Consider the parameter vector θ = (θ_1, …, θ_{n-1}), and then define (0_i, θ) = (θ_1, …, θ_{i-1}, 0, θ_{i+1}, …, θ_{n-1}) and (1_i, θ) = (θ_1, …, θ_{i-1}, 1, θ_{i+1}, …, θ_{n-1}).
Theorem 5.
The Bayes–Fisher information for the parameter θ_i, i = 1, …, n-1, of the finite mixture PMF in (3), under the uniform prior on [0, 1], is given by
Ĩ(θ_i) = KL\left( P_{(1_i, θ)}, P_{(0_i, θ)} \right) + KL\left( P_{(0_i, θ)}, P_{(1_i, θ)} \right) = J\left( P_{(0_i, θ)}, P_{(1_i, θ)} \right),
where P_{(1_i, θ)} = (p_{(1_i, θ) 1}, …, p_{(1_i, θ) k}), with
p_{(1_i, θ) j} = \frac{1}{n-1} p_{ij} + \frac{1}{n-1} \sum_{l=1, l≠i}^{n-1} θ_l p_{lj} + \left( 1 - \frac{1}{n-1} \left( 1 + \sum_{l=1, l≠i}^{n-1} θ_l \right) \right) p_{nj},    (8)
and P_{(0_i, θ)} = (p_{(0_i, θ) 1}, …, p_{(0_i, θ) k}), with
p_{(0_i, θ) j} = \frac{1}{n-1} \sum_{l=1, l≠i}^{n-1} θ_l p_{lj} + \left( 1 - \frac{1}{n-1} \sum_{l=1, l≠i}^{n-1} θ_l \right) p_{nj},    (9)
and J corresponds to Jeffreys’ divergence.
Proof. 
By definition and from (7), for i = 1, …, n-1, we have
Ĩ(θ_i) = E[I(Θ_i)] = \frac{1}{(n-1)^2} \int_0^1 \sum_{j=1}^{k} \frac{(p_{ij} - p_{nj})^2}{p_{θj}} dθ_i = \frac{1}{n-1} \sum_{j=1}^{k} (p_{ij} - p_{nj}) \int_0^1 \frac{\frac{1}{n-1} (p_{ij} - p_{nj})}{p_{θj}} dθ_i = \frac{1}{n-1} \sum_{j=1}^{k} (p_{ij} - p_{nj}) \log p_{θj} \Big|_{θ_i = 0}^{θ_i = 1}.    (10)
On the other hand, we have
p_{(1_i, θ) j} - p_{(0_i, θ) j} = \frac{1}{n-1} (p_{ij} - p_{nj}).    (11)
Hence, upon substituting (11) into (10), we obtain
Ĩ(θ_i) = \frac{1}{n-1} \sum_{j=1}^{k} (p_{ij} - p_{nj}) \log \frac{p_{(1_i, θ) j}}{p_{(0_i, θ) j}} = \sum_{j=1}^{k} \left( p_{(1_i, θ) j} - p_{(0_i, θ) j} \right) \log \frac{p_{(1_i, θ) j}}{p_{(0_i, θ) j}} = KL\left( P_{(1_i, θ)}, P_{(0_i, θ)} \right) + KL\left( P_{(0_i, θ)}, P_{(1_i, θ)} \right) = J\left( P_{(0_i, θ)}, P_{(1_i, θ)} \right),
as required.  □
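A numerical check of Theorem 5 (ours): integrating the Fisher information of θ_1 against the uniform prior reproduces Jeffreys' divergence between the two extreme mixtures P_{(1_1, θ)} and P_{(0_1, θ)}.

import numpy as np

def mixture_pmf(theta, pmfs):
    theta, pmfs = np.asarray(theta, dtype=float), np.asarray(pmfs, dtype=float)
    n = pmfs.shape[0]
    return np.append(theta / (n - 1), 1.0 - theta.sum() / (n - 1)) @ pmfs

def kl(p, q):
    return np.sum(p * np.log(p / q))

rng = np.random.default_rng(4)
n, k = 3, 5
pmfs = rng.dirichlet(np.ones(k), size=n)
theta_2 = 0.3                                    # the other mixing parameter is held fixed

def fisher_1(t):
    # Fisher information about theta_1 when theta = (t, theta_2)
    p_theta = mixture_pmf([t, theta_2], pmfs)
    score = (pmfs[0] - pmfs[n - 1]) / (n - 1)
    return np.sum(score ** 2 / p_theta)

# Bayes-Fisher information under the uniform prior: trapezoidal integration over [0, 1]
grid = np.linspace(0.0, 1.0, 20001)
vals = np.array([fisher_1(t) for t in grid])
bayes_fisher = np.sum((vals[1:] + vals[:-1]) / 2 * np.diff(grid))

P1 = mixture_pmf([1.0, theta_2], pmfs)           # P_{(1_1, theta)}: theta_1 set to 1
P0 = mixture_pmf([0.0, theta_2], pmfs)           # P_{(0_1, theta)}: theta_1 set to 0
jeffreys = kl(P1, P0) + kl(P0, P1)
print(bayes_fisher, jeffreys)                    # agree up to numerical-integration error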
Theorem 6.
For the mixture model with PMF in (3), we have the following:
(i) 
The Bayes–Fisher information for θ_i, i = 1, …, n-1, under the Beta(2, 1) prior with density π(θ_i) = 2θ_i, θ_i ∈ [0, 1], is
Ĩ(θ_i) = 2 KL\left( P_{(0_i, θ)}, P_{(1_i, θ)} \right),   i = 1, …, n-1;
(ii) 
The Bayes–Fisher information for the parameter θ_i, i = 1, …, n-1, under the Beta(1, 2) prior with density π(θ_i) = 2(1 - θ_i), θ_i ∈ [0, 1], is
Ĩ(θ_i) = 2 KL\left( P_{(1_i, θ)}, P_{(0_i, θ)} \right),   i = 1, …, n-1.
Proof. 
By definition and from (7), and noting that \sum_{j=1}^{k} (p_{ij} - p_{nj}) = 0, we have, for i = 1, …, n-1,
Ĩ(θ_i) = E[I(Θ_i)] = \frac{1}{(n-1)^2} \int_0^1 \sum_{j=1}^{k} \frac{(p_{ij} - p_{nj})^2}{p_{θj}} π(θ_i) dθ_i
= \frac{2}{n-1} \sum_{j=1}^{k} (p_{ij} - p_{nj}) \int_0^1 \frac{\frac{θ_i}{n-1} (p_{ij} - p_{nj})}{p_{θj}} dθ_i
= \frac{2}{n-1} \sum_{j=1}^{k} (p_{ij} - p_{nj}) \int_0^1 \frac{p_{θj} - p_{(0_i, θ) j}}{p_{θj}} dθ_i
= \frac{2}{n-1} \sum_{j=1}^{k} (p_{ij} - p_{nj}) \left[ 1 - \frac{(n-1) p_{(0_i, θ) j}}{p_{ij} - p_{nj}} \log p_{θj} \Big|_{θ_i = 0}^{θ_i = 1} \right]
= -2 \sum_{j=1}^{k} p_{(0_i, θ) j} \log \frac{p_{(1_i, θ) j}}{p_{(0_i, θ) j}} = 2 \sum_{j=1}^{k} p_{(0_i, θ) j} \log \frac{p_{(0_i, θ) j}}{p_{(1_i, θ) j}} = 2 KL\left( P_{(0_i, θ)}, P_{(1_i, θ)} \right),
as required for Part (i). Part (ii) can be proved in an analogous manner.  □
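Part (i) of Theorem 6 can be checked in the same way (helpers ours), by weighting the Fisher information with the Beta(2, 1) density 2θ before integrating:

import numpy as np

def mixture_pmf(theta, pmfs):
    theta, pmfs = np.asarray(theta, dtype=float), np.asarray(pmfs, dtype=float)
    n = pmfs.shape[0]
    return np.append(theta / (n - 1), 1.0 - theta.sum() / (n - 1)) @ pmfs

def kl(p, q):
    return np.sum(p * np.log(p / q))

rng = np.random.default_rng(4)
n, k = 3, 5
pmfs = rng.dirichlet(np.ones(k), size=n)
theta_2 = 0.3

def fisher_1(t):
    p_theta = mixture_pmf([t, theta_2], pmfs)
    score = (pmfs[0] - pmfs[n - 1]) / (n - 1)
    return np.sum(score ** 2 / p_theta)

# Bayes-Fisher information under the Beta(2, 1) prior with density 2 * theta
grid = np.linspace(0.0, 1.0, 20001)
vals = np.array([fisher_1(t) * 2 * t for t in grid])
bayes_fisher = np.sum((vals[1:] + vals[:-1]) / 2 * np.diff(grid))

P1 = mixture_pmf([1.0, theta_2], pmfs)
P0 = mixture_pmf([0.0, theta_2], pmfs)
print(bayes_fisher, 2 * kl(P0, P1))              # agree up to numerical-integration error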
Let us now consider the following general triangular prior for the parameter θ_i, i = 1, …, n-1:
π_α(θ_i) = \frac{2 θ_i}{α} for 0 < θ_i ≤ α, and π_α(θ_i) = \frac{2(1 - θ_i)}{1 - α} for α ≤ θ_i < 1,    (12)
for some α ∈ (0, 1).
Theorem 7.
The Bayes–Fisher information for the parameter θ_i, i = 1, …, n-1, under the general triangular prior with density π_α(θ_i) in (12), is given by
Ĩ(θ_i) = \frac{2}{α(1-α)} \left[ α KL\left( P_{(1_i, θ)}, P_α \right) + (1-α) KL\left( P_{(0_i, θ)}, P_α \right) \right] = \frac{2}{α(1-α)} JS\left( P_{(1_i, θ)}, P_{(0_i, θ)}; α \right),
where P_α = (p_{α 1}, …, p_{α k}) is a finite mixture PMF, with
p_{α j} = \frac{α}{n-1} p_{ij} + \frac{1}{n-1} \sum_{l=1, l≠i}^{n-1} θ_l p_{lj} + \left( 1 - \frac{1}{n-1} \left( α + \sum_{l=1, l≠i}^{n-1} θ_l \right) \right) p_{nj},
that is, P_α = α P_{(1_i, θ)} + (1 - α) P_{(0_i, θ)}, and P_{(1_i, θ)} and P_{(0_i, θ)} are as defined in (8) and (9), respectively.
Proof. 
From the assumptions made, and noting that \sum_{j=1}^{k} (p_{ij} - p_{nj}) = 0, we have, for i = 1, …, n-1,
Ĩ(θ_i) = E[I(Θ_i)] = \int_0^α I(θ_i) π_α(θ_i) dθ_i + \int_α^1 I(θ_i) π_α(θ_i) dθ_i
= \frac{2}{(n-1)α} \sum_{j=1}^{k} (p_{ij} - p_{nj}) \int_0^α \frac{\frac{θ_i}{n-1} (p_{ij} - p_{nj})}{p_{θj}} dθ_i + \frac{2}{(n-1)(1-α)} \sum_{j=1}^{k} (p_{ij} - p_{nj}) \int_α^1 \frac{\frac{1-θ_i}{n-1} (p_{ij} - p_{nj})}{p_{θj}} dθ_i
= \frac{2}{(n-1)α} \sum_{j=1}^{k} (p_{ij} - p_{nj}) \int_0^α \left( 1 - \frac{p_{(0_i, θ) j}}{p_{θj}} \right) dθ_i - \frac{2}{(n-1)(1-α)} \sum_{j=1}^{k} (p_{ij} - p_{nj}) \int_α^1 \left( 1 - \frac{p_{(1_i, θ) j}}{p_{θj}} \right) dθ_i
= \frac{2}{α} \sum_{j=1}^{k} p_{(0_i, θ) j} \log \frac{p_{(0_i, θ) j}}{p_{α j}} + \frac{2}{1-α} \sum_{j=1}^{k} p_{(1_i, θ) j} \log \frac{p_{(1_i, θ) j}}{p_{α j}}
= \frac{2}{α(1-α)} \left[ α KL\left( P_{(1_i, θ)}, P_α \right) + (1-α) KL\left( P_{(0_i, θ)}, P_α \right) \right] = \frac{2}{α(1-α)} JS\left( P_{(1_i, θ)}, P_{(0_i, θ)}; α \right),
as required.  □
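Finally, a numerical check of Theorem 7 under the triangular prior (12) (the helpers are ours):

import numpy as np

def mixture_pmf(theta, pmfs):
    theta, pmfs = np.asarray(theta, dtype=float), np.asarray(pmfs, dtype=float)
    n = pmfs.shape[0]
    return np.append(theta / (n - 1), 1.0 - theta.sum() / (n - 1)) @ pmfs

def kl(p, q):
    return np.sum(p * np.log(p / q))

rng = np.random.default_rng(4)
n, k = 3, 5
pmfs = rng.dirichlet(np.ones(k), size=n)
theta_2, alpha = 0.3, 0.4

def fisher_1(t):
    p_theta = mixture_pmf([t, theta_2], pmfs)
    score = (pmfs[0] - pmfs[n - 1]) / (n - 1)
    return np.sum(score ** 2 / p_theta)

def triangular_density(t):
    # the prior (12)
    return 2 * t / alpha if t <= alpha else 2 * (1 - t) / (1 - alpha)

# Bayes-Fisher information under the triangular prior, by trapezoidal integration
grid = np.linspace(0.0, 1.0, 20001)
vals = np.array([fisher_1(t) * triangular_density(t) for t in grid])
bayes_fisher = np.sum((vals[1:] + vals[:-1]) / 2 * np.diff(grid))

P1 = mixture_pmf([1.0, theta_2], pmfs)                     # P_{(1_1, theta)}
P0 = mixture_pmf([0.0, theta_2], pmfs)                     # P_{(0_1, theta)}
P_alpha = alpha * P1 + (1 - alpha) * P0                    # the mixture P_alpha of Theorem 7
rhs = 2 / (alpha * (1 - alpha)) * (alpha * kl(P1, P_alpha) + (1 - alpha) * kl(P0, P_alpha))
print(bayes_fisher, rhs)                                   # agree up to numerical-integration error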

5. Concluding Remarks

In this paper, we have introduced the discrete version of Jensen–Fisher information measure, and have shown that this information measure can be expressed as a mixture of discrete Fisher information distance measures. Further, we have considered a finite mixture probability mass function and have derived Fisher information and Bayes–Fisher information for the mixing parameter vector. We have shown that the Fisher information for the mixing parameter is connected to chi-square divergence. We have also studied the Bayes–Fisher information for the mixing parameter of a finite mixture model under some prior distributions. These results have provided connections between the Bayes–Fisher information and some known informational measures such as Shannon entropy, Kullback–Leibler, Jeffreys and Jensen–Shannon divergence measures.

Author Contributions

All authors contributed equally to this work. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

Data sharing not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Shannon, C.E. A mathematical theory of communication. Bell Syst. Tech. J. 1948, 27, 379–423.
2. Fisher, R.A. Tests of significance in harmonic analysis. Proc. R. Soc. Lond. A Math. Phys. Sci. 1929, 125, 54–59.
3. Zegers, P. Fisher information properties. Entropy 2015, 17, 4918–4939.
4. Balakrishnan, N.; Stepanov, A. On the Fisher information in record data. Stat. Probab. Lett. 2006, 76, 537–545.
5. Lin, J. Divergence measures based on the Shannon entropy. IEEE Trans. Inf. Theory 1991, 37, 145–151.
6. Sánchez-Moreno, P.; Zarzo, A.; Dehesa, J.S. Jensen divergence based on Fisher’s information. J. Phys. A Math. Theor. 2012, 45, 125305.
7. Mehrali, Y.; Asadi, M.; Kharazmi, O. A Jensen–Gini measure of divergence with application in parameter estimation. Metron 2018, 76, 115–131.
8. Martin, M.T.; Pennini, F.; Plastino, A. Fisher’s information and the analysis of complex signals. Phys. Lett. A 1999, 256, 173–180.
9. Ramírez-Pacheco, J.; Torres-Román, D.; Rizo-Dominguez, L.; Trejo-Sanchez, J.; Manzano-Pinzón, F. Wavelet Fisher’s information measure of 1/fα signals. Entropy 2011, 13, 1648–1663.
10. Ramírez-Pacheco, J.; Torres-Román, D.; Argaez-Xool, J.; Rizo-Dominguez, L.; Trejo-Sanchez, J.; Manzano-Pinzón, F. Wavelet q-Fisher information for scaling signal analysis. Entropy 2012, 14, 1478–1500.
11. Johnson, O. Information Theory and the Central Limit Theorem; World Scientific Publishers: Singapore, 2004.
12. Contreras-Reyes, J.E.; Cortés, D.D. Bounds on Rényi and Shannon entropies for finite mixtures of multivariate skew-normal distributions: Application to swordfish (Xiphias gladius Linnaeus). Entropy 2016, 18, 382.
13. Abid, S.H.; Quaez, U.J.; Contreras-Reyes, J.E. An information-theoretic approach for multivariate skew-t distributions and applications. Mathematics 2021, 9, 146.
14. Kolchinsky, A.; Tracey, B.D. Estimating mixture entropy with pairwise distances. Entropy 2017, 19, 361.
15. Kullback, S.; Leibler, R.A. On information and sufficiency. Ann. Math. Stat. 1951, 22, 79–86.
16. Broniatowski, M. Minimum divergence estimators, maximum likelihood and the generalized bootstrap. Entropy 2021, 23, 185.
17. Cover, T.; Thomas, J. Elements of Information Theory; John Wiley & Sons: Hoboken, NJ, USA, 2006.
Figure 1. 3D-plot of the DJFI divergence between the PMFs P = ( p , 1 p ) and Q = ( q , 1 q ) .
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
