Article

Logical Entropy and Logical Mutual Information of Experiments in the Intuitionistic Fuzzy Case

by
Dagmar Markechová
1,* and
Beloslav Riečan
2,3
1
Department of Mathematics, Faculty of Natural Sciences, Constantine the Philosopher University in Nitra, A. Hlinku 1, SK-949 01 Nitra, Slovakia
2
Department of Mathematics, Faculty of Natural Sciences, Matej Bel University, Tajovského 40, SK-974 01 Banská Bystrica, Slovakia
3
Mathematical Institute, Slovak Academy of Sciences, Štefánikova 49, SK-814 73 Bratislava, Slovakia
*
Author to whom correspondence should be addressed.
Entropy 2017, 19(8), 429; https://doi.org/10.3390/e19080429
Submission received: 4 July 2017 / Revised: 6 August 2017 / Accepted: 17 August 2017 / Published: 21 August 2017
(This article belongs to the Special Issue Entropy and Its Applications across Disciplines)

Abstract: In this contribution, we introduce the concepts of logical entropy and logical mutual information of experiments in the intuitionistic fuzzy case, and study the basic properties of the suggested measures. Subsequently, by means of the suggested notion of logical entropy of an IF-partition, we define the logical entropy of an IF-dynamical system. It is shown that the logical entropy of IF-dynamical systems is invariant under isomorphism. Finally, an analogy of the Kolmogorov–Sinai theorem on generators for IF-dynamical systems is proved.

1. Introduction

The notions of entropy and mutual information are basic notions in information theory [1] and, as is known, the customary approach is based on Shannon's entropy [2]. Let $P = (p_1, \ldots, p_n)$ be a probability distribution; Shannon's entropy of $P$ is the number $H_s(P) = \sum_{i=1}^{n} s(p_i)$, where $s: [0, 1] \to [0, \infty)$ is the Shannon function defined by $s(x) = -x \log x$ for every $x \in [0, 1]$. Note the convention (based on continuity arguments) that $0 \log 0 = 0$. The idea of Shannon's entropy was generalized in a natural way to the Kolmogorov–Sinai entropy $h(T)$ of dynamical systems [3,4,5], which allows dynamical systems to be distinguished: Kolmogorov and Sinai applied the entropy $h(T)$ to prove that non-isomorphic Bernoulli shifts exist. Of course, the theory of Kolmogorov–Sinai entropy has many other important applications, and for this reason various proposals have been made to generalize the Kolmogorov–Sinai entropy concept. In [6], we generalized it to the case of a fuzzy probability space [7]. This structure represents an alternative mathematical model of probability theory for situations in which the considered events are fuzzy events, i.e., events described unclearly or vaguely. Further fuzzy generalizations of Shannon's and the Kolmogorov–Sinai entropy are presented, e.g., in [8,9,10,11,12,13,14,15,16,17]. It is known that there are many ways to define operations modeling the union and intersection of fuzzy sets; an overview is given in [18]. We remark that while the model studied in [6] was based on Zadeh's fuzzy set operations [19], in our study [14] the Łukasiewicz fuzzy set operations were used.
Since its inception in 1965, the fuzzy set theory has been continually developing, and it has been shown to be useful in many disciplines. It has been applied to many mathematical areas, such as algebra, analysis, clustering, graph theory, measure theory, probability theory, control theory, optimization, topology, and so on. Currently, algebraic structures based on fuzzy set theory, such as MV-algebras [20,21,22,23,24,25,26,27,28], D-posets [29,30,31], effect algebras [32,33], and A-posets [34,35,36], are intensively studied. There are also interesting results about the Kolmogorov type entropy on these structures; some of them can be found, e.g., in [37,38,39,40,41,42,43]. Moreover, the fuzzy set theory also has significant practical applications; applications of this theory can be found, for example, in control engineering, data processing, management, logistics, artificial intelligence, computer science, medicine, decision theory, expert systems, logic, management science, operations research, pattern recognition, and robotics.
In 1983, Atanassov introduced a more general theory: the theory of intuitionistic fuzzy sets [44,45,46]. Recall that while a fuzzy set is a mapping $\mu_A: \Omega \to [0, 1]$ (the fuzzy set being identified with its membership function $\mu_A$), an intuitionistic fuzzy set (IF-set, for short) is a pair $A = (\mu_A, \nu_A): \Omega \to [0, 1] \times [0, 1]$ of fuzzy sets satisfying the condition $\mu_A(\omega) + \nu_A(\omega) \le 1$ for every $\omega \in \Omega$. The function $\mu_A$ is called the membership function of $A$, and $\nu_A$ the non-membership function of $A$. Evidently, each fuzzy set $\mu_A$ can be regarded as an IF-set $A = (\mu_A, 1_\Omega - \mu_A)$, so each result that applies to IF-sets also applies to fuzzy sets. Of course, the opposite implication is not valid; e.g., the representation theorem for IF-states does not follow from the corresponding result for fuzzy states. The theory of IF-sets represents a non-trivial generalization of fuzzy set theory; thus, IF-sets provide opportunities to model a larger class of real situations. We remark that a probability theory on intuitionistic fuzzy events has been elaborated in [47]; see also [48]. Some results about Kolmogorov-type entropy for the case of intuitionistic fuzzy sets are given, e.g., in [49,50,51,52,53].
When solving some specific problems, it is more appropriate to use, instead of Shannon's entropy, an approach based on the concept of logical entropy [54,55,56,57]. If $P = (p_1, \ldots, p_n)$ is a probability distribution, then the logical entropy of $P$ is defined by the formula $H(P) = \sum_{i=1}^{n} p_i (1 - p_i)$. In [57], historical aspects of the logical entropy formula $H(P)$ are discussed and the relationship between logical entropy and Shannon's entropy is examined; the concepts of logical conditional entropy and logical mutual information have been introduced as well. We note that some results about logical entropy on some of the above-mentioned algebraic structures based on fuzzy set theory can be found, e.g., in [58,59,60,61,62].
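The two entropy formulas above are straightforward to compute side by side. The following sketch (the helper names are ours, not from the paper) evaluates Shannon's entropy and the logical entropy of the same finite probability distribution:

```python
from math import log2

def shannon_entropy(p):
    """Shannon entropy H_s(P) = -sum p_i * log p_i, with the convention 0*log 0 = 0."""
    return -sum(pi * log2(pi) for pi in p if pi > 0)

def logical_entropy(p):
    """Logical entropy H(P) = sum p_i * (1 - p_i) = 1 - sum p_i^2."""
    return 1 - sum(pi * pi for pi in p)

P = [0.5, 0.25, 0.25]
print(shannon_entropy(P))  # 1.5 (bits)
print(logical_entropy(P))  # 1 - (0.25 + 0.0625 + 0.0625) = 0.625
```

Both measures vanish on a deterministic distribution and are maximal on the uniform one; logical entropy, however, is always bounded by 1.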
The purpose of the present work is to study the logical entropy and logical mutual information of experiments in the intuitionistic fuzzy case. The paper is organized in the following way. In the following section, basic definitions and notations are provided. In Section 3, the concept of logical entropy for the case of intuitionistic fuzzy experiments is introduced, and basic properties of the proposed measure are shown. In Section 4, we introduce the concepts of logical mutual information and conditional mutual information of intuitionistic fuzzy experiments and derive some properties of these measures. In Section 5, using the suggested concept of logical entropy, we define the logical entropy of IF-dynamical systems. It is shown that the logical entropy of IF-dynamical systems is invariant under isomorphism. Finally, an analogy of the Kolmogorov–Sinai theorem on generators for IF-dynamical systems is proved. Section 6 contains a brief summary.

2. Basic Definitions, Notations and Facts

In this section, we provide basic definitions, notations and facts that will be used throughout the contribution.
Definition 1.
By an IF-event we will understand a pair $A = (\mu_A, \nu_A)$ of functions $\mu_A, \nu_A: \Omega \to [0, 1]$ with the property $\mu_A(\omega) + \nu_A(\omega) \le 1$ for every $\omega \in \Omega$.
In the following, we will use the symbol $\mathcal{F}$ to denote the family of all IF-events. Analogously to the fuzzy case, there are many possibilities to define operations modeling the union and intersection of IF-sets (see, e.g., [63,64,65]). We will use the operations $\oplus$ and $\odot$ defined as follows. In the family $\mathcal{F}$ we define the partial binary operation $\oplus$ in the following way: if $A = (\mu_A, \nu_A)$ and $B = (\mu_B, \nu_B)$ are two IF-events, then $A \oplus B = (\mu_A + \mu_B,\ \nu_A + \nu_B - 1_\Omega)$. Here, $1_\Omega$ denotes the function defined by $1_\Omega(\omega) = 1$ for every $\omega \in \Omega$; similarly, we denote by $0_\Omega$ the function defined by $0_\Omega(\omega) = 0$ for every $\omega \in \Omega$. Evidently, if $A, B \in \mathcal{F}$, then $A \oplus B$ exists if and only if $\mu_A + \mu_B \le 1_\Omega$ and $\nu_A + \nu_B \ge 1_\Omega$. The zero element of the operation $\oplus$ is the IF-event $0 = (0_\Omega, 1_\Omega)$; indeed, $A \oplus 0 = (\mu_A, \nu_A) \oplus (0_\Omega, 1_\Omega) = (\mu_A, \nu_A) = A$ for any $A \in \mathcal{F}$. Further, in the family $\mathcal{F}$ we define the binary operation $\odot$ in the following way: if $A = (\mu_A, \nu_A)$ and $B = (\mu_B, \nu_B)$, then $A \odot B = (\mu_A \cdot \mu_B,\ 1_\Omega - (1_\Omega - \nu_A) \cdot (1_\Omega - \nu_B)) = (\mu_A \cdot \mu_B,\ \nu_A + \nu_B - \nu_A \cdot \nu_B)$. Put $1 = (1_\Omega, 0_\Omega)$; evidently, $A \odot 1 = A$ for any $A \in \mathcal{F}$. The IF-event $0 = (0_\Omega, 1_\Omega)$ is interpreted as an impossible event, and the IF-event $1 = (1_\Omega, 0_\Omega)$ as a certain event. It can easily be verified that, for any $A, B, C \in \mathcal{F}$, the following conditions are satisfied:
(F1)
$A \oplus B = B \oplus A$ if one side is defined in $\mathcal{F}$ (commutativity);
(F2)
$(A \oplus B) \oplus C = A \oplus (B \oplus C)$ if one side is defined in $\mathcal{F}$ (associativity);
(F3)
if $A \oplus B$ exists, then $C \odot A \oplus C \odot B$ exists, and $C \odot (A \oplus B) = C \odot A \oplus C \odot B$.
Since in the fuzzy case the inequality $\mu_A \le \mu_B$ implies $\nu_A = 1_\Omega - \mu_A \ge 1_\Omega - \mu_B = \nu_B$, in the family $\mathcal{F}$ it is natural to define the relation $\le$ as follows: if $A = (\mu_A, \nu_A)$ and $B = (\mu_B, \nu_B)$ are two IF-events, then $A \le B$ if and only if $\mu_A \le \mu_B$ and $\nu_A \ge \nu_B$. The relation $\le$ is a partial order such that $0 \le A \le 1$ for all $A \in \mathcal{F}$. Gutiérrez García and Rodabaugh have proved that the ordering and topology of intuitionistic fuzzy sets reduce to the ordering and topology of fuzzy sets [66]. The situation is different in measure theory, where the intuitionistic fuzzy case cannot be reduced to the fuzzy case.
Definition 2.
A map $m: \mathcal{F} \to [0, 1]$ is said to be a state if the following conditions are satisfied:
(i)
$m(A \oplus B) = m(A) + m(B)$ whenever $A \oplus B$ is defined in $\mathcal{F}$;
(ii)
$m(1) = 1$.
Example 1.
Consider a probability space $(\Omega, S, P)$, and put
$\mathcal{F} = \{A = (\mu_A, \nu_A);\ \mu_A, \nu_A: \Omega \to [0, 1]\ \text{are } S\text{-measurable with } \mu_A + \nu_A \le 1_\Omega\}.$
It is easy to verify that the mapping $m: \mathcal{F} \to [0, 1]$ defined, for any element $A = (\mu_A, \nu_A)$ of $\mathcal{F}$, by the formula:
$m(A) = \int_\Omega \mu_A \, dP + \alpha \left(1 - \int_\Omega (\mu_A + \nu_A) \, dP\right), \quad \alpha \in [0, 1],$   (1)
is a state. Namely, for every $A, B \in \mathcal{F}$ such that $A \oplus B$ exists, we have:
$m(A \oplus B) = \int_\Omega (\mu_A + \mu_B) \, dP + \alpha \left(1 - \int_\Omega (\mu_A + \mu_B + \nu_A + \nu_B - 1_\Omega) \, dP\right) = \int_\Omega (\mu_A + \mu_B) \, dP + \alpha \left(2 - \int_\Omega (\mu_A + \mu_B + \nu_A + \nu_B) \, dP\right) = \int_\Omega \mu_A \, dP + \alpha \left(1 - \int_\Omega (\mu_A + \nu_A) \, dP\right) + \int_\Omega \mu_B \, dP + \alpha \left(1 - \int_\Omega (\mu_B + \nu_B) \, dP\right) = m(A) + m(B),$
and
$m(1) = m((1_\Omega, 0_\Omega)) = \int_\Omega dP + \alpha \left(1 - \int_\Omega dP\right) = 1 + \alpha(1 - 1) = 1.$
Remark 1.
Riečan and Ciungu have shown in [67] that any continuous state $m$ defined on the family $\mathcal{F}$ of all $S$-measurable IF-events has the form (1). In more detail, if a state $m$ defined on the family $\mathcal{F}$ of all $S$-measurable IF-events is continuous (i.e., $A_n \nearrow A$ implies $m(A_n) \nearrow m(A)$), then there exist exactly one probability measure $P: S \to [0, 1]$ and exactly one $\alpha \in [0, 1]$ such that:
$m(A) = \int_\Omega \mu_A \, dP + \alpha \left(1 - \int_\Omega (\mu_A + \nu_A) \, dP\right), \quad \text{for any } A = (\mu_A, \nu_A) \text{ of } \mathcal{F}.$
Definition 3.
By an IF-partition of $\mathcal{F}$, we will understand a finite collection $\xi = \{A_1, \ldots, A_n\}$ of elements of $\mathcal{F}$ such that $\oplus_{i=1}^{n} A_i$ exists, and $m(\oplus_{i=1}^{n} A_i) = \sum_{i=1}^{n} m(A_i) = 1$.
Remark 2.
A classical probability space $(\Omega, S, P)$ can be regarded as a family of IF-events if we put $\mathcal{F} = \{(\chi_E, 1_\Omega - \chi_E);\ E \in S\}$, where $\chi_E$ is the characteristic function of a set $E \in S$; the mapping $m: \mathcal{F} \to [0, 1]$ defined by $m((\chi_E, 1_\Omega - \chi_E)) = P(E)$ is a state on $\mathcal{F}$. A usual measurable partition $\{E_1, \ldots, E_n\}$ of a space $(\Omega, S, P)$ (i.e., any sequence $\{E_1, \ldots, E_n\} \subset S$ such that $\cup_{i=1}^{n} E_i = \Omega$ and $E_i \cap E_j = \emptyset$ for $i \ne j$) can be regarded as an IF-partition if we consider $(\chi_{E_i}, 1_\Omega - \chi_{E_i})$ instead of $E_i$. Namely, $E_i \cap E_j = \emptyset$ ($i \ne j$) implies $\sum_{i=1}^{n} \chi_{E_i}(\omega) \le 1$ for every $\omega \in \Omega$, and hence $\oplus_{i=1}^{n} (\chi_{E_i}, 1_\Omega - \chi_{E_i})$ exists. Moreover, we have:
$m\left(\oplus_{i=1}^{n} (\chi_{E_i}, 1_\Omega - \chi_{E_i})\right) = m\left(\left(\sum_{i=1}^{n} \chi_{E_i},\ 1_\Omega - \sum_{i=1}^{n} \chi_{E_i}\right)\right) = m\left(\left(\chi_{\cup_{i=1}^{n} E_i},\ 1_\Omega - \chi_{\cup_{i=1}^{n} E_i}\right)\right) = P\left(\cup_{i=1}^{n} E_i\right) = P(\Omega) = 1,$
and the equality $P(\cup_{i=1}^{n} E_i) = \sum_{i=1}^{n} P(E_i)$ implies:
$m\left(\oplus_{i=1}^{n} (\chi_{E_i}, 1_\Omega - \chi_{E_i})\right) = \sum_{i=1}^{n} m\left((\chi_{E_i}, 1_\Omega - \chi_{E_i})\right).$
Definition 4.
Let $\xi = \{A_1, \ldots, A_I\}$, $\eta = \{B_1, \ldots, B_J\}$ be two IF-partitions of $\mathcal{F}$. The IF-partition $\eta$ is said to be a refinement of $\xi$ (with respect to $m$) if for each $A_i \in \xi$ there exists a subset $\alpha_i \subset \{1, \ldots, J\}$ such that $m(A_i) = m(\oplus_{j \in \alpha_i} B_j) = \sum_{j \in \alpha_i} m(B_j)$, $\alpha_i \cap \alpha_j = \emptyset$ for $i \ne j$, and $\cup_{i=1}^{I} \alpha_i = \{1, \ldots, J\}$.
In the case that $\eta$ is a refinement of $\xi$, we write $\xi \prec \eta$.
Denote by $\mathcal{M}$ the family of all mappings $A = (\mu_A, \nu_A): \Omega \to [0, 1] \times [0, 1]$. If $A = (\mu_A, \nu_A)$ and $B = (\mu_B, \nu_B)$ are two elements of $\mathcal{M}$, then we put $A \oplus B = (\mu_A + \mu_B,\ \nu_A + \nu_B - 1_\Omega)$ and $A \odot B = (\mu_A \cdot \mu_B,\ \nu_A + \nu_B - \nu_A \cdot \nu_B)$.
Theorem 1.
Let $m: \mathcal{F} \to [0, 1]$ be a state. Then the mapping $\overline{m}: \mathcal{M} \to [0, 1]$ defined, for any element $A = (\mu_A, \nu_A)$ of $\mathcal{M}$, by
$\overline{m}((\mu_A, \nu_A)) = m((\mu_A, 0_\Omega)) - m((0_\Omega, 1_\Omega - \nu_A))$
is a state, and $\overline{m}|_{\mathcal{F}} = m$, i.e., $\overline{m}(A) = m(A)$ for any $A \in \mathcal{F}$.
Proof. 
The proof can be found in [68]. ☐
Proposition 1.
Let $A \in \mathcal{F}$ be such that $m(A) = 1$. Then $m(A \odot B) = m(B)$ for any $B \in \mathcal{F}$.
Proof. 
Put $C = (1_\Omega - \mu_A,\ 1_\Omega - \nu_A)$. Then:
$A \oplus C = (\mu_A + 1_\Omega - \mu_A,\ \nu_A + 1_\Omega - \nu_A - 1_\Omega) = (1_\Omega, 0_\Omega) = 1,$
$A \odot B \oplus B \odot C = (\mu_A \mu_B,\ \nu_A + \nu_B - \nu_A \nu_B) \oplus (\mu_B (1_\Omega - \mu_A),\ \nu_B + 1_\Omega - \nu_A - \nu_B (1_\Omega - \nu_A)) = B,$
$1 = \overline{m}(A \oplus C) = \overline{m}(A) + \overline{m}(C) = 1 + \overline{m}(C),$
hence $\overline{m}(C) = 0$. From the monotonicity of $\overline{m}$ it follows that $\overline{m}(B \odot C) \le \overline{m}(C) = 0$.
Therefore:
$m(B) = \overline{m}(B) = \overline{m}(A \odot B) + \overline{m}(B \odot C) = \overline{m}(A \odot B) = m(A \odot B).$ ☐
Proposition 2.
Let $\xi = \{A_1, \ldots, A_n\}$ be an IF-partition of $\mathcal{F}$. Then $\sum_{i=1}^{n} m(A_i \odot B) = m(B)$ for any $B \in \mathcal{F}$.
Proof. 
Since $m(\oplus_{i=1}^{n} A_i) = 1$, by Proposition 1 and (F3) we get:
$m(B) = m\left(\left(\oplus_{i=1}^{n} A_i\right) \odot B\right) = m\left(\oplus_{i=1}^{n} (A_i \odot B)\right) = \sum_{i=1}^{n} m(A_i \odot B).$ ☐
Definition 5.
Let $\xi = \{A_1, \ldots, A_I\}$, $\eta = \{B_1, \ldots, B_J\}$ be two IF-partitions of $\mathcal{F}$. Their join $\xi \vee \eta$ is defined as the system $\xi \vee \eta = \{A_i \odot B_j;\ i = 1, \ldots, I,\ j = 1, \ldots, J\}$ if $\xi \ne \eta$, and $\xi \vee \xi = \xi$.
Theorem 2.
If $\xi, \eta$ are two IF-partitions of $\mathcal{F}$, then $\xi \vee \eta$ is also an IF-partition of $\mathcal{F}$, and $\xi \prec \xi \vee \eta$, $\eta \prec \xi \vee \eta$.
Proof. 
Let $\xi = \{A_1, \ldots, A_I\}$, $\eta = \{B_1, \ldots, B_J\}$. Since $\oplus_{i=1}^{I} A_i$ and $\oplus_{j=1}^{J} B_j$ exist, according to (F3) we obtain that $\oplus_{i=1}^{I} \oplus_{j=1}^{J} (A_i \odot B_j)$ also exists, and $\oplus_{i=1}^{I} \oplus_{j=1}^{J} (A_i \odot B_j) = \left(\oplus_{i=1}^{I} A_i\right) \odot \left(\oplus_{j=1}^{J} B_j\right)$.
By Definition 2 we have:
$m\left(\oplus_{i=1}^{I} \oplus_{j=1}^{J} (A_i \odot B_j)\right) = \sum_{i=1}^{I} \sum_{j=1}^{J} m(A_i \odot B_j).$
Moreover, using Proposition 1 we get:
$m\left(\oplus_{i=1}^{I} \oplus_{j=1}^{J} (A_i \odot B_j)\right) = m\left(\left(\oplus_{i=1}^{I} A_i\right) \odot \left(\oplus_{j=1}^{J} B_j\right)\right) = m\left(\oplus_{j=1}^{J} B_j\right) = 1.$
This means that $\xi \vee \eta = \{A_i \odot B_j;\ i = 1, \ldots, I,\ j = 1, \ldots, J\}$ is an IF-partition of $\mathcal{F}$.
Since the system $\xi \vee \eta$ is indexed by $\{(i, j);\ i = 1, \ldots, I,\ j = 1, \ldots, J\}$, we put $\alpha_i = \{(i, 1), \ldots, (i, J)\}$, $i = 1, 2, \ldots, I$. Since $m(\oplus_{j=1}^{J} B_j) = 1$, according to Proposition 1 and (F3), for $i = 1, 2, \ldots, I$, we get:
$m(A_i) = m\left(\left(\oplus_{j=1}^{J} B_j\right) \odot A_i\right) = m\left(\oplus_{j=1}^{J} (B_j \odot A_i)\right) = \sum_{j=1}^{J} m(A_i \odot B_j) = \sum_{(r, j) \in \alpha_i} m(A_r \odot B_j).$
However, this means that $\xi \prec \xi \vee \eta$. The relation $\eta \prec \xi \vee \eta$ is obtained analogously. ☐

3. Logical Entropy of IF-Partitions

It is obvious that each IF-partition $\xi = \{A_1, \ldots, A_n\}$ represents, from the point of view of classical probability theory, a random experiment with a finite number of outcomes $A_i$, $i = 1, 2, \ldots, n$ (which are intuitionistic fuzzy events), with the probability distribution $p_i = m(A_i)$, $i = 1, 2, \ldots, n$. Namely, $p_i \ge 0$ for $i = 1, 2, \ldots, n$, and $\sum_{i=1}^{n} p_i = \sum_{i=1}^{n} m(A_i) = 1$. For that reason, we define the logical entropy of $\xi = \{A_1, \ldots, A_n\}$ as the number:
$H(\xi) = \sum_{i=1}^{n} m(A_i)(1 - m(A_i)).$   (2)
Since $\sum_{i=1}^{n} m(A_i) = 1$, we can also write:
$H(\xi) = 1 - \sum_{i=1}^{n} (m(A_i))^2.$   (3)
Remark 3.
Evidently, the IF-partition η = { 1 } has zero logical entropy.
Example 2.
Consider the measurable space $(\Omega, S)$, where $\Omega$ is the unit interval $[0, 1]$ and $S$ is the $\sigma$-algebra of all Borel subsets of $[0, 1]$. Now, we can consider the family of all $S$-measurable IF-events $\mathcal{F} = \{A = (\mu_A, \nu_A);\ \mu_A, \nu_A: [0, 1] \to [0, 1]\ \text{are } S\text{-measurable with } \mu_A + \nu_A \le 1_\Omega\}$, and the state $m: \mathcal{F} \to [0, 1]$ defined, for any element $A = (\mu_A, \nu_A)$ of $\mathcal{F}$, by the formula:
$m(A) = \int_0^1 \mu_A \, dx + 1 - \int_0^1 (\mu_A + \nu_A) \, dx.$
Put $A_1 = A_2 = (0.1_\Omega,\ 0.5_\Omega)$. Since $A_1 \oplus A_2 = (0.2_\Omega,\ 0_\Omega)$ (and therefore $A_1 \oplus A_2$ exists), and $m(A_1 \oplus A_2) = \int_0^1 0.2 \, dx + 1 - \int_0^1 0.2 \, dx = 1$, the set $\xi = \{A_1, A_2\}$ is an IF-partition. Its elements have the $m$-state values $0.5, 0.5$, and the logical entropy is $H(\xi) = 0.5$.
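The computation in Example 2 can be checked numerically. The sketch below (helper names are ours) evaluates the state of Example 2 by a midpoint rule, which is exact for the constant membership functions used there:

```python
def integrate(f, n=10_000):
    # midpoint rule on [0, 1]; exact for constant integrands
    h = 1.0 / n
    return sum(f((k + 0.5) * h) for k in range(n)) * h

def state(mu, nu):
    # the state of Example 2: m(A) = ∫ mu dx + 1 - ∫ (mu + nu) dx  (alpha = 1)
    return integrate(mu) + 1 - integrate(lambda x: mu(x) + nu(x))

m1 = state(lambda x: 0.1, lambda x: 0.5)  # m(A1) = m(A2) = 0.5
H_xi = 1 - (m1 ** 2 + m1 ** 2)            # logical entropy of xi = {A1, A2}
print(m1, H_xi)                           # 0.5 0.5 (up to floating-point error)
```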
Some basic properties of the logical entropy of IF-partitions are listed below.
Theorem 3.
Let $\xi, \eta$ be two IF-partitions of $\mathcal{F}$. Then:
(i)
$H(\xi) \ge 0$;
(ii)
$\xi \prec \eta$ implies $H(\xi) \le H(\eta)$;
(iii)
$H(\xi \vee \eta) \ge \max(H(\xi), H(\eta))$.
Proof. 
The property (i) is evident. We will prove the second property. Let $\xi = \{A_1, \ldots, A_I\}$, $\eta = \{B_1, \ldots, B_J\}$, $\xi \prec \eta$. Then, for any $A_i \in \xi$ there exists a subset $\alpha_i \subset \{1, \ldots, J\}$ such that $m(A_i) = \sum_{j \in \alpha_i} m(B_j)$, $\alpha_i \cap \alpha_j = \emptyset$ for $i \ne j$, and $\cup_{i=1}^{I} \alpha_i = \{1, \ldots, J\}$. Hence, we can write:
$H(\xi) = \sum_{i=1}^{I} m(A_i)(1 - m(A_i)) = \sum_{i=1}^{I} \left(m(A_i) - m(A_i) \cdot m(A_i)\right) = \sum_{i=1}^{I} \left(\sum_{j \in \alpha_i} m(B_j) - \sum_{j \in \alpha_i} m(B_j) \cdot \sum_{j \in \alpha_i} m(B_j)\right).$
As a consequence of the inequality $(a + b)^2 \ge a^2 + b^2$, which holds for all non-negative real numbers $a, b$, we get:
$\sum_{j \in \alpha_i} m(B_j) \cdot \sum_{j \in \alpha_i} m(B_j) \ge \sum_{j \in \alpha_i} (m(B_j))^2, \quad i = 1, \ldots, I.$
Therefore:
$H(\xi) \le \sum_{i=1}^{I} \left(\sum_{j \in \alpha_i} m(B_j) - \sum_{j \in \alpha_i} (m(B_j))^2\right) = \sum_{i=1}^{I} \sum_{j \in \alpha_i} \left(m(B_j) - (m(B_j))^2\right) = \sum_{j=1}^{J} m(B_j)(1 - m(B_j)) = H(\eta).$
The inequality (iii) is a simple consequence of the previous property and Theorem 2. ☐
Definition 6.
If $\xi = \{A_1, \ldots, A_I\}$, $\eta = \{B_1, \ldots, B_J\}$ are two IF-partitions of $\mathcal{F}$, then the conditional logical entropy of $\xi$ assuming a realization of the IF-experiment $\eta$ is defined by the formula:
$H(\xi / \eta) = \sum_{i=1}^{I} \sum_{j=1}^{J} m(A_i \odot B_j) \left(m(B_j) - m(A_i \odot B_j)\right).$   (4)
Remark 4.
Since $m(A_i \odot B_j) \le m(B_j)$, the conditional logical entropy satisfies $H(\xi / \eta) \ge 0$. If we put $\eta = \{1\}$, then $H(\xi / \eta) = H(\xi)$.
Remark 5.
Since, by Proposition 2, it holds that $\sum_{i=1}^{I} m(A_i \odot B_j) = m(B_j)$ for $j = 1, \ldots, J$, we can also write:
$H(\xi / \eta) = \sum_{j=1}^{J} (m(B_j))^2 - \sum_{i=1}^{I} \sum_{j=1}^{J} (m(A_i \odot B_j))^2.$   (5)
Theorem 4.
Let $\xi, \eta$ be two IF-partitions of $\mathcal{F}$. Then:
$H(\xi / \eta) = H(\xi \vee \eta) - H(\eta).$   (6)
Proof. 
Assume that $\xi = \{A_1, \ldots, A_I\}$, $\eta = \{B_1, \ldots, B_J\}$. Let us calculate:
$H(\eta) + H(\xi / \eta) = 1 - \sum_{j=1}^{J} (m(B_j))^2 + \sum_{j=1}^{J} (m(B_j))^2 - \sum_{i=1}^{I} \sum_{j=1}^{J} (m(A_i \odot B_j))^2 = 1 - \sum_{i=1}^{I} \sum_{j=1}^{J} (m(A_i \odot B_j))^2 = H(\xi \vee \eta).$ ☐
Remark 6.
As a simple consequence of Theorem 4, we get:
$H(\xi \vee \eta) = H(\xi) + H(\eta / \xi),$   (7)
and according to Definition 5 we obtain $H(\xi / \xi) = 0$.
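The identity $H(\xi \vee \eta) = H(\eta) + H(\xi / \eta)$ can be verified directly from formulas (3) and (5) on a small table of joint state values. The table below is a hypothetical example of ours, not taken from the paper:

```python
def H_from_values(values):
    # logical entropy: 1 - sum of squared state values (formula (3))
    return 1 - sum(p * p for p in values)

def H_cond(joint, col_margins):
    # conditional logical entropy (formula (5)):
    # sum_j m(B_j)^2 - sum_{i,j} m(A_i . B_j)^2
    return sum(q * q for q in col_margins) - sum(p * p for row in joint for p in row)

joint = [[0.2, 0.1],   # m(A_i ⊙ B_j), a hypothetical joint table
         [0.3, 0.4]]
cols = [sum(row[j] for row in joint) for j in range(2)]   # m(B_j) = 0.5, 0.5

lhs = H_from_values([p for row in joint for p in row])    # H(xi v eta) = 0.70
rhs = H_from_values(cols) + H_cond(joint, cols)           # H(eta) + H(xi/eta)
```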
Theorem 5.
Let $\xi, \eta$ be two IF-partitions of $\mathcal{F}$. Then:
(i)
$H(\xi / \eta) \le H(\xi)$;
(ii)
$H(\xi \vee \eta) \le H(\xi) + H(\eta)$.
Proof. 
(i) Assume that $\xi = \{A_1, \ldots, A_I\}$, $\eta = \{B_1, \ldots, B_J\}$. Since, by Proposition 2, we have:
$\sum_{j=1}^{J} m(A_i \odot B_j) = m(A_i), \quad \text{for } i = 1, \ldots, I,$
it holds, for $i = 1, \ldots, I$, that:
$\sum_{j=1}^{J} m(A_i \odot B_j)\left(m(B_j) - m(A_i \odot B_j)\right) \le \left(\sum_{j=1}^{J} m(A_i \odot B_j)\right)\left(\sum_{j=1}^{J} \left(m(B_j) - m(A_i \odot B_j)\right)\right) = m(A_i)\left(\sum_{j=1}^{J} m(B_j) - \sum_{j=1}^{J} m(A_i \odot B_j)\right) = m(A_i)(1 - m(A_i)).$
Therefore, we get:
$H(\xi / \eta) = \sum_{i=1}^{I} \sum_{j=1}^{J} m(A_i \odot B_j)\left(m(B_j) - m(A_i \odot B_j)\right) \le \sum_{i=1}^{I} m(A_i)(1 - m(A_i)) = H(\xi).$
(ii) The property (i) along with (7) implies:
$H(\xi \vee \eta) = H(\eta) + H(\xi / \eta) \le H(\eta) + H(\xi).$
The proof is complete. ☐
Theorem 6.
Let $\xi, \eta, \varsigma$ be IF-partitions of $\mathcal{F}$. Then:
$H(\xi \vee \eta / \varsigma) = H(\xi / \varsigma) + H(\eta / \varsigma \vee \xi).$
Proof. 
Let $\xi = \{A_1, \ldots, A_I\}$, $\eta = \{B_1, \ldots, B_J\}$, $\varsigma = \{C_1, \ldots, C_K\}$. Then by Equation (5) we get:
$H(\xi / \varsigma) + H(\eta / \varsigma \vee \xi) = \sum_{k=1}^{K} (m(C_k))^2 - \sum_{i=1}^{I} \sum_{k=1}^{K} (m(A_i \odot C_k))^2 + \sum_{k=1}^{K} \sum_{i=1}^{I} (m(C_k \odot A_i))^2 - \sum_{j=1}^{J} \sum_{k=1}^{K} \sum_{i=1}^{I} (m(B_j \odot C_k \odot A_i))^2 = \sum_{k=1}^{K} (m(C_k))^2 - \sum_{j=1}^{J} \sum_{k=1}^{K} \sum_{i=1}^{I} (m(B_j \odot C_k \odot A_i))^2 = H(\xi \vee \eta / \varsigma).$ ☐
Theorem 7.
Let $\xi_1, \xi_2, \ldots, \xi_n$ and $\eta$ be IF-partitions of $\mathcal{F}$. Then:
(i)
$H(\xi_1 \vee \xi_2 \vee \cdots \vee \xi_n) = H(\xi_1) + \sum_{i=2}^{n} H\left(\xi_i / \vee_{k=1}^{i-1} \xi_k\right)$;
(ii)
$H\left(\vee_{i=1}^{n} \xi_i / \eta\right) = H(\xi_1 / \eta) + \sum_{i=2}^{n} H\left(\xi_i / \left(\vee_{k=1}^{i-1} \xi_k\right) \vee \eta\right)$.
Proof. 
(i) We shall prove the statement using mathematical induction. By Equation (7), we have:
$H(\xi_1 \vee \xi_2) = H(\xi_1) + H(\xi_2 / \xi_1).$
For $n = 3$, using the previous equality and Theorem 6, we get:
$H(\xi_1 \vee \xi_2 \vee \xi_3) = H(\xi_1) + H(\xi_2 \vee \xi_3 / \xi_1) = H(\xi_1) + H(\xi_2 / \xi_1) + H(\xi_3 / \xi_2 \vee \xi_1) = H(\xi_1) + \sum_{i=2}^{3} H\left(\xi_i / \vee_{k=1}^{i-1} \xi_k\right).$
Now let us suppose that the result holds for a given $n \in \mathbb{N}$. Then:
$H(\xi_1 \vee \xi_2 \vee \cdots \vee \xi_n \vee \xi_{n+1}) = H(\xi_1 \vee \cdots \vee \xi_n) + H(\xi_{n+1} / \xi_1 \vee \cdots \vee \xi_n) = H(\xi_1) + \sum_{i=2}^{n} H\left(\xi_i / \vee_{k=1}^{i-1} \xi_k\right) + H(\xi_{n+1} / \xi_1 \vee \cdots \vee \xi_n) = H(\xi_1) + \sum_{i=2}^{n+1} H\left(\xi_i / \vee_{k=1}^{i-1} \xi_k\right).$
Thus, by the principle of mathematical induction, the result follows.
(ii) The proof of the second assertion is analogous; it suffices to use Theorem 6 and the principle of mathematical induction. ☐

4. Logical Mutual Information of IF-Partitions

In this section, using the results of the previous parts, we define the notions of logical mutual information and logical conditional mutual information of IF-partitions and prove basic properties of these measures. We also present some numerical examples to illustrate the results.
Definition 7.
Let $\xi, \eta$ be two IF-partitions of $\mathcal{F}$. Then we define the logical mutual information of $\xi$ and $\eta$ by the formula:
$I(\xi, \eta) = H(\xi) - H(\xi / \eta).$   (8)
Remark 7.
As a simple consequence of Equation (6), we have:
$I(\xi, \eta) = H(\xi) + H(\eta) - H(\xi \vee \eta).$   (9)
From Equation (9), it follows that $I(\xi, \eta) = I(\eta, \xi)$ and $I(\xi, \xi) = H(\xi)$.
Theorem 8.
Let $\xi, \eta$ be two IF-partitions of $\mathcal{F}$. Then:
$0 \le I(\xi, \eta) \le \min(H(\xi), H(\eta)).$
Proof. 
The non-negativity of logical mutual information I ( ξ , η ) follows from the subadditivity of logical entropy (the property (ii) of Theorem 5) and Equation (9). The second inequality is a consequence of Equation (9) and the property (iii) of Theorem 3. ☐
Example 3.
Consider the family $\mathcal{F}$ of IF-events from Example 2 and the state $m: \mathcal{F} \to [0, 1]$ defined by the formula:
$m(A) = \int_0^1 \mu_A \, dx + \frac{1}{2}\left(1 - \int_0^1 (\mu_A + \nu_A) \, dx\right).$   (10)
Put $A_1 = (\mu_{A_1}, \nu_{A_1})$, where the functions $\mu_{A_1}, \nu_{A_1}: [0, 1] \to [0, 1]$ are defined by $\mu_{A_1}(x) = x$, $\nu_{A_1}(x) = 1 - x$ for every $x \in [0, 1]$, and $A_2 = (\mu_{A_2}, \nu_{A_2})$, where $\mu_{A_2}(x) = 1 - x$, $\nu_{A_2}(x) = x$ for every $x \in [0, 1]$. Evidently, the set $\xi = \{A_1, A_2\}$ is an IF-partition with the $m$-state values $\frac{1}{2}, \frac{1}{2}$ of the corresponding elements, and the logical entropy $H(\xi) = \frac{1}{2}$. Further, we put $B_1 = (\mu_{B_1}, \nu_{B_1})$, where the functions $\mu_{B_1}, \nu_{B_1}: [0, 1] \to [0, 1]$ are defined by $\mu_{B_1}(x) = x^2$, $\nu_{B_1}(x) = 1 - x^2$ for every $x \in [0, 1]$, and $B_2 = (\mu_{B_2}, \nu_{B_2})$, where the functions $\mu_{B_2}, \nu_{B_2}: [0, 1] \to [0, 1]$ are defined by $\mu_{B_2}(x) = 1 - x^2$, $\nu_{B_2}(x) = x^2$ for every $x \in [0, 1]$. Then the set $\eta = \{B_1, B_2\}$ is an IF-partition with the $m$-state values $\frac{1}{3}, \frac{2}{3}$ of the corresponding elements and the logical entropy $H(\eta) = \frac{4}{9}$. The join of $\xi$ and $\eta$ is the system $\xi \vee \eta = \{A_1 \odot B_1, A_1 \odot B_2, A_2 \odot B_1, A_2 \odot B_2\}$, where $\mu_{A_1 \odot B_1}(x) = x^3$, $\nu_{A_1 \odot B_1}(x) = 1 - x^3$, $\mu_{A_1 \odot B_2}(x) = x(1 - x^2)$, $\nu_{A_1 \odot B_2}(x) = 1 - x(1 - x^2)$, $\mu_{A_2 \odot B_1}(x) = (1 - x)x^2$, $\nu_{A_2 \odot B_1}(x) = 1 - (1 - x)x^2$, $\mu_{A_2 \odot B_2}(x) = (1 - x)(1 - x^2)$, $\nu_{A_2 \odot B_2}(x) = 1 - (1 - x)(1 - x^2)$, $x \in [0, 1]$, with the $m$-state values $\frac{1}{4}, \frac{1}{4}, \frac{1}{12}, \frac{5}{12}$ of the corresponding elements. The logical entropy of $\xi \vee \eta$ is the number:
$H(\xi \vee \eta) = 1 - \frac{11}{36} = \frac{25}{36} \doteq 0.6944.$
Let us calculate the logical mutual information $I(\xi, \eta)$ of the IF-partitions $\xi = \{A_1, A_2\}$, $\eta = \{B_1, B_2\}$. By Equation (9), we get:
$I(\xi, \eta) = \frac{1}{2} + \frac{4}{9} - 1 + \frac{11}{36} = \frac{9}{36} = \frac{1}{4} = 0.25.$
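The state values and the mutual information of Example 3 can be reproduced numerically: since every event involved satisfies $\nu = 1_\Omega - \mu$, the state (10) reduces to $m(A) = \int_0^1 \mu_A \, dx$, which we approximate with a midpoint rule (helper names are ours):

```python
def integrate(f, n=20_000):
    # midpoint rule on [0, 1]
    h = 1.0 / n
    return sum(f((k + 0.5) * h) for k in range(n)) * h

# every IF-event in Example 3 has nu = 1 - mu, so m(A) reduces to ∫ mu dx
mA  = [integrate(lambda x: x), integrate(lambda x: 1 - x)]        # 1/2, 1/2
mB  = [integrate(lambda x: x**2), integrate(lambda x: 1 - x**2)]  # 1/3, 2/3
mAB = [integrate(lambda x: x**3),
       integrate(lambda x: x * (1 - x**2)),
       integrate(lambda x: (1 - x) * x**2),
       integrate(lambda x: (1 - x) * (1 - x**2))]                 # 1/4, 1/4, 1/12, 5/12

H = lambda values: 1 - sum(p * p for p in values)
I_xi_eta = H(mA) + H(mB) - H(mAB)   # 1/2 + 4/9 - 25/36 = 1/4
```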
Theorem 9.
If the IF-partitions $\xi = \{A_1, \ldots, A_I\}$ and $\eta = \{B_1, \ldots, B_J\}$ are independent, i.e., $m(A_i \odot B_j) = m(A_i) \cdot m(B_j)$ for $i = 1, \ldots, I$, $j = 1, \ldots, J$, then $I(\xi, \eta) = H(\xi) \cdot H(\eta)$.
Proof. 
Let us calculate:
$I(\xi, \eta) = H(\xi) + H(\eta) - H(\xi \vee \eta) = 1 - \sum_{i=1}^{I} (m(A_i))^2 + 1 - \sum_{j=1}^{J} (m(B_j))^2 - 1 + \sum_{i=1}^{I} \sum_{j=1}^{J} (m(A_i \odot B_j))^2 = 1 - \sum_{i=1}^{I} (m(A_i))^2 - \sum_{j=1}^{J} (m(B_j))^2 + \sum_{i=1}^{I} (m(A_i))^2 \cdot \sum_{j=1}^{J} (m(B_j))^2 = \left(1 - \sum_{i=1}^{I} (m(A_i))^2\right)\left(1 - \sum_{j=1}^{J} (m(B_j))^2\right) = H(\xi) \cdot H(\eta).$ ☐
Corollary 1.
If the IF-partitions $\xi, \eta$ are independent, then:
$1 - H(\xi \vee \eta) = (1 - H(\xi)) \cdot (1 - H(\eta)).$
Proof. 
Let us calculate:
$(1 - H(\xi)) \cdot (1 - H(\eta)) = 1 - H(\xi) - H(\eta) + H(\xi) \cdot H(\eta) = 1 - H(\xi) - H(\eta) + I(\xi, \eta) = 1 - H(\xi) - H(\eta) + H(\xi) + H(\eta) - H(\xi \vee \eta) = 1 - H(\xi \vee \eta).$ ☐
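The product identity of Corollary 1 is easy to check numerically on any pair of independent partitions: take arbitrary state values for $\xi$ and $\eta$ and form the joint values as products. The state values below are a hypothetical example of ours:

```python
from itertools import product

p = [0.5, 0.3, 0.2]   # m-state values of xi (hypothetical)
q = [0.25, 0.75]      # m-state values of eta (hypothetical)

# independence: m(A_i ⊙ B_j) = m(A_i) * m(B_j)
joint = [pi * qj for pi, qj in product(p, q)]

H = lambda values: 1 - sum(x * x for x in values)
lhs = 1 - H(joint)               # 1 - H(xi v eta)
rhs = (1 - H(p)) * (1 - H(q))    # (1 - H(xi)) * (1 - H(eta))
```

The quantity $1 - H(\xi)$ is the sum of squared state values, and under independence that sum factors, which is exactly what the code exercises.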
In the following part, we define the logical conditional mutual information of IF-partitions and, using this notion, we establish the chain rules for logical mutual information of IF-partitions.
Definition 8.
Let $\xi, \eta, \varsigma$ be IF-partitions of $\mathcal{F}$. Then the logical conditional mutual information of $\xi$ and $\eta$ assuming a realization of $\varsigma$ is defined by the formula:
$I(\xi, \eta / \varsigma) = H(\xi / \varsigma) - H(\xi / \eta \vee \varsigma).$   (11)
Theorem 10.
For IF-partitions $\xi, \eta, \varsigma$ of $\mathcal{F}$, it holds that:
$I(\xi, \eta \vee \varsigma) = I(\xi, \eta) + I(\xi, \varsigma / \eta) = I(\xi, \varsigma) + I(\xi, \eta / \varsigma).$
Proof. 
Let us calculate:
$I(\xi, \eta) + I(\xi, \varsigma / \eta) = H(\xi) - H(\xi / \eta) + H(\xi / \eta) - H(\xi / \eta \vee \varsigma) = H(\xi) - H(\xi / \eta \vee \varsigma) = I(\xi, \eta \vee \varsigma).$
The second equality is obtained analogously. ☐
The result of the previous theorem is illustrated by the following example.
Example 4.
Consider the family $\mathcal{F}$ of IF-events from Example 2, the state $m: \mathcal{F} \to [0, 1]$ defined by Equation (10), and the IF-partitions $\xi = \{A_1, A_2\}$, $\eta = \{B_1, B_2\}$ from the previous example. In addition, put $\varsigma = \{C_1, C_2\}$, where $\mu_{C_1}(x) = x^3$, $\nu_{C_1}(x) = 1 - x^3$, $\mu_{C_2}(x) = 1 - x^3$, $\nu_{C_2}(x) = x^3$ for every $x \in [0, 1]$. We will show that $I(\xi, \eta \vee \varsigma) = I(\xi, \eta) + I(\xi, \varsigma / \eta)$. The join of $\eta$ and $\varsigma$ is the system $\eta \vee \varsigma = \{B_1 \odot C_1, B_1 \odot C_2, B_2 \odot C_1, B_2 \odot C_2\}$ with the $m$-state values $\frac{1}{6}, \frac{1}{6}, \frac{1}{12}, \frac{7}{12}$ of the corresponding elements. By simple calculation, we obtain:
$H(\xi / \eta \vee \varsigma) = 0.402777 - 0.229614 = 0.173163,$
and consequently
$I(\xi, \varsigma / \eta) = H(\xi / \eta) - H(\xi / \eta \vee \varsigma) = 0.25 - 0.173163 = 0.076837.$
By definition we have:
$I(\xi, \eta \vee \varsigma) = H(\xi) - H(\xi / \eta \vee \varsigma) = 0.5 - 0.173163 = 0.326837.$
In Example 3, we calculated that $I(\xi, \eta) = 0.25$. It is now easy to verify that the equality $I(\xi, \eta \vee \varsigma) = I(\xi, \eta) + I(\xi, \varsigma / \eta)$ holds: $0.326837 = 0.25 + 0.076837$.
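The numbers of Example 4 can be reproduced in the same way as those of Example 3: all events involved satisfy $\nu = 1_\Omega - \mu$, so every $m$-value is the integral of a product of membership functions (helper names are ours):

```python
def integrate(f, n=20_000):
    # midpoint rule on [0, 1]
    h = 1.0 / n
    return sum(f((k + 0.5) * h) for k in range(n)) * h

muA = [lambda x: x,    lambda x: 1 - x]
muB = [lambda x: x**2, lambda x: 1 - x**2]
muC = [lambda x: x**3, lambda x: 1 - x**3]

# nu = 1 - mu for every event involved, so each m-value is the ∫ of the product
m_B   = [integrate(b) for b in muB]
m_AB  = [integrate(lambda x, a=a, b=b: a(x) * b(x)) for a in muA for b in muB]
m_BC  = [integrate(lambda x, b=b, c=c: b(x) * c(x)) for b in muB for c in muC]
m_ABC = [integrate(lambda x, a=a, b=b, c=c: a(x) * b(x) * c(x))
         for a in muA for b in muB for c in muC]

sq = lambda vals: sum(v * v for v in vals)
H_xi = 1 - sq([integrate(a) for a in muA])     # 0.5
H_xi_eta      = sq(m_B)  - sq(m_AB)            # H(xi/eta) = 0.25
H_xi_eta_zeta = sq(m_BC) - sq(m_ABC)           # H(xi/eta v zeta) ≈ 0.173163

I_xi_joined = H_xi - H_xi_eta_zeta             # I(xi, eta v zeta) ≈ 0.326837
I_sum = (H_xi - H_xi_eta) + (H_xi_eta - H_xi_eta_zeta)  # I(xi,eta) + I(xi,zeta/eta)
```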
Theorem 11.
(Chain rules for logical mutual information.) Let $\xi_1, \xi_2, \ldots, \xi_n$ and $\eta$ be IF-partitions of $\mathcal{F}$. Then, for $n = 1, 2, \ldots$, it holds that:
$I\left(\vee_{i=1}^{n} \xi_i, \eta\right) = I(\xi_1, \eta) + \sum_{i=2}^{n} I\left(\xi_i, \eta / \vee_{k=1}^{i-1} \xi_k\right).$
Proof. 
By Equation (8), Theorem 7, and Equation (11), we obtain:
$I\left(\vee_{i=1}^{n} \xi_i, \eta\right) = H\left(\vee_{i=1}^{n} \xi_i\right) - H\left(\vee_{i=1}^{n} \xi_i / \eta\right) = H(\xi_1) + \sum_{i=2}^{n} H\left(\xi_i / \vee_{k=1}^{i-1} \xi_k\right) - H(\xi_1 / \eta) - \sum_{i=2}^{n} H\left(\xi_i / \left(\vee_{k=1}^{i-1} \xi_k\right) \vee \eta\right) = I(\xi_1, \eta) + \sum_{i=2}^{n} \left(H\left(\xi_i / \vee_{k=1}^{i-1} \xi_k\right) - H\left(\xi_i / \left(\vee_{k=1}^{i-1} \xi_k\right) \vee \eta\right)\right) = I(\xi_1, \eta) + \sum_{i=2}^{n} I\left(\xi_i, \eta / \vee_{k=1}^{i-1} \xi_k\right).$ ☐
Definition 9.
Let $\xi, \eta, \varsigma$ be IF-partitions of $\mathcal{F}$. We say that $\xi$ is conditionally independent of $\varsigma$ assuming a realization of $\eta$ (and write $\xi \to \eta \to \varsigma$) if $I(\xi, \varsigma / \eta) = 0$.
Theorem 12.
For IF-partitions $\xi, \eta, \varsigma$ of $\mathcal{F}$, it holds that $\xi \to \eta \to \varsigma$ if and only if $\varsigma \to \eta \to \xi$.
Proof. 
Let $\xi \to \eta \to \varsigma$, i.e., $I(\xi, \varsigma / \eta) = 0$. Then $H(\xi / \eta) = H(\xi / \eta \vee \varsigma)$, and by Equation (6) we get:
$H(\xi / \eta) = H(\xi / \eta \vee \varsigma) = H(\xi \vee \eta \vee \varsigma) - H(\eta \vee \varsigma).$
Let us calculate:
$I(\varsigma, \xi / \eta) = H(\varsigma / \eta) - H(\varsigma / \xi \vee \eta) = H(\varsigma \vee \eta) - H(\eta) - H(\xi \vee \eta \vee \varsigma) + H(\xi \vee \eta) = \left(H(\xi \vee \eta) - H(\eta)\right) - \left(H(\xi \vee \eta \vee \varsigma) - H(\varsigma \vee \eta)\right) = H(\xi / \eta) - H(\xi / \eta) = 0.$
However, this indicates that $\varsigma \to \eta \to \xi$. The reverse implication is obtained analogously. ☐
Remark 8.
According to the previous theorem, we may say that $\xi$ and $\varsigma$ are conditionally independent assuming a realization of $\eta$, and write $\xi \leftrightarrow \eta \leftrightarrow \varsigma$ instead of $\xi \to \eta \to \varsigma$.
Theorem 13.
For IF-partitions $\xi, \eta, \varsigma$ of $\mathcal{F}$ such that $\xi \to \eta \to \varsigma$, we have:
(i)
$I(\xi \vee \eta, \varsigma) = I(\eta, \varsigma)$;
(ii)
$I(\eta, \varsigma) = I(\xi, \varsigma) + I(\varsigma, \eta / \xi)$;
(iii)
$I(\xi, \eta / \varsigma) \le I(\xi, \eta)$.
Proof. 
(i) Since, by the assumption, $I(\xi, \varsigma / \eta) = 0$, using the chain rule for logical mutual information we obtain:
$I(\xi \vee \eta, \varsigma) = I(\eta \vee \xi, \varsigma) = I(\eta, \varsigma) + I(\xi, \varsigma / \eta) = I(\eta, \varsigma).$
(ii) By Theorem 10, we have $I(\xi \vee \eta, \varsigma) = I(\varsigma, \xi) + I(\varsigma, \eta / \xi)$. Hence, using (i), we can write:
$I(\eta, \varsigma) = I(\xi \vee \eta, \varsigma) = I(\varsigma, \xi) + I(\varsigma, \eta / \xi).$
(iii) From (ii) it follows that $I(\varsigma, \eta / \xi) \le I(\varsigma, \eta)$. By Theorem 12, we can interchange $\xi$ and $\varsigma$; doing so, we obtain the inequality $I(\xi, \eta / \varsigma) \le I(\xi, \eta)$. ☐
We note that, in the classical theory, the last claim of Theorem 13 is known as the data processing inequality.

5. Logical Entropy of IF-Dynamical Systems

The classical dynamical system is a quadruple $(\Omega, S, P, T)$, where $(\Omega, S, P)$ is a probability space and $T: \Omega \to \Omega$ is a measure-preserving map, i.e., $A \in S$ implies $T^{-1}(A) \in S$ and $P(T^{-1}(A)) = P(A)$. Define $\tau: S \to S$ by the equality $\tau(A) = T^{-1}(A)$ for any $A \in S$. Then $\tau$ is a mapping with the property $P(\tau(A)) = P(A)$ for any $A \in S$. In addition, $\tau(A \cup B) = T^{-1}(A \cup B) = T^{-1}(A) \cup T^{-1}(B) = \tau(A) \cup \tau(B)$ for any $A, B \in S$; analogously, $\tau(A \cap B) = \tau(A) \cap \tau(B)$ for any $A, B \in S$. This is the motivation for the following definition.
Definition 10.
Let $\mathcal{F}$ be the family of all IF-events and $m: \mathcal{F} \to [0, 1]$ be a state. Then the triplet $(\mathcal{F}, m, \tau)$ will be called an IF-dynamical system if $\tau: \mathcal{F} \to \mathcal{F}$ is a mapping satisfying the following conditions:
(i)
$A \in \mathcal{F}$ implies $\tau(A) \in \mathcal{F}$ and $m(A) = m(\tau(A))$;
(ii)
if $A, B, C \in \mathcal{F}$ and $A \oplus B = C$, then $\tau(C) = \tau(A) \oplus \tau(B)$;
(iii)
if $A, B \in \mathcal{F}$, then $\tau(A \odot B) = \tau(A) \odot \tau(B)$.
Proposition 3.
Let an IF-dynamical system $(\mathcal{F}, m, \tau)$ be given. If $\xi = \{A_1, \ldots, A_n\}$ is an IF-partition of $\mathcal{F}$, then the system $\tau\xi = \{\tau(A_1), \ldots, \tau(A_n)\}$ is also an IF-partition of $\mathcal{F}$.
Proof. 
Since $\oplus_{i=1}^{n} A_i$ exists, according to Definition 10, $\tau(\oplus_{i=1}^{n} A_i) \in \mathcal{F}$ and $\tau(\oplus_{i=1}^{n} A_i) = \oplus_{i=1}^{n} \tau(A_i)$. This means that $\oplus_{i=1}^{n} \tau(A_i)$ exists. Moreover, we have:
$m\left(\oplus_{i=1}^{n} \tau(A_i)\right) = m\left(\tau\left(\oplus_{i=1}^{n} A_i\right)\right) = m\left(\oplus_{i=1}^{n} A_i\right) = 1,$
and
$m\left(\oplus_{i=1}^{n} \tau(A_i)\right) = m\left(\oplus_{i=1}^{n} A_i\right) = \sum_{i=1}^{n} m(A_i) = \sum_{i=1}^{n} m(\tau(A_i)).$ ☐
Define $\tau^2 = \tau \circ \tau$, and put $\tau^k = \tau \circ \tau^{k-1}$, $k = 1, 2, \dots$, where $\tau^0$ is the identity mapping on $F$.
Theorem 14.
Let $(F, m, \tau)$ be an IF-dynamical system. If $\xi, \eta$ are IF-partitions of $F$, then the following properties are satisfied:
(i)
$\tau(\xi \vee \eta) = \tau\xi \vee \tau\eta$;
(ii)
$\xi \preceq \eta$ implies $\tau\xi \preceq \tau\eta$;
(iii)
$H(\tau^k \xi) = H(\xi)$, $k = 0, 1, 2, \dots$;
(iv)
$H(\tau^k \xi / \tau^k \eta) = H(\xi / \eta)$, $k = 0, 1, 2, \dots$;
(v)
$H(\bigvee_{i=0}^{n-1} \tau^i \xi) = H(\xi) + \sum_{j=1}^{n-1} H(\xi / \bigvee_{i=1}^{j} \tau^i \xi)$.
Proof. 
Assume that $\xi = \{A_1, \dots, A_I\}$, $\eta = \{B_1, \dots, B_J\}$.
Property (i) follows from the condition $\tau(A_i \cdot B_j) = \tau(A_i) \cdot \tau(B_j)$, $i = 1, \dots, I$, $j = 1, \dots, J$.
(ii) If $\xi \preceq \eta$, then for each $A_i \in \xi$, there exists a subset $\alpha_i \subset \{1, \dots, J\}$ such that $m(A_i) = m(\bigoplus_{j \in \alpha_i} B_j) = \sum_{j \in \alpha_i} m(B_j)$, $\alpha_i \cap \alpha_j = \emptyset$ for $i \neq j$, and $\bigcup_{i=1}^{I} \alpha_i = \{1, \dots, J\}$. We get:
$m(\tau(A_i)) = m(\tau(\bigoplus_{j \in \alpha_i} B_j)) = m(\bigoplus_{j \in \alpha_i} \tau(B_j)) = \sum_{j \in \alpha_i} m(\tau(B_j))$, $i = 1, 2, \dots, I$.
This means that $\tau\xi \preceq \tau\eta$.
(iii) Since $m(\tau^k(A_i)) = m(A_i)$, for $i = 1, 2, \dots, I$, $k = 0, 1, 2, \dots$, we get:
$H(\tau^k \xi) = 1 - \sum_{i=1}^{I} (m(\tau^k(A_i)))^2 = 1 - \sum_{i=1}^{I} (m(A_i))^2 = H(\xi)$.
(iv) The proof is analogous to the proof of the previous property.
(v) We proceed by mathematical induction. For $n = 2$, the equality holds by Equation (7). Assume that the statement holds for a given $n \in \mathbb{N}$; we prove that it is true for $n + 1$. By part (iii), we have:
$H(\bigvee_{i=1}^{n} \tau^i \xi) = H(\tau(\bigvee_{i=0}^{n-1} \tau^i \xi)) = H(\bigvee_{i=0}^{n-1} \tau^i \xi)$.
Therefore, by Equation (7) and the induction assumption, we can write:
$H(\bigvee_{i=0}^{n} \tau^i \xi) = H((\bigvee_{i=1}^{n} \tau^i \xi) \vee \xi) = H(\bigvee_{i=1}^{n} \tau^i \xi) + H(\xi / \bigvee_{i=1}^{n} \tau^i \xi) = H(\bigvee_{i=0}^{n-1} \tau^i \xi) + H(\xi / \bigvee_{i=1}^{n} \tau^i \xi) = H(\xi) + \sum_{j=1}^{n-1} H(\xi / \bigvee_{i=1}^{j} \tau^i \xi) + H(\xi / \bigvee_{i=1}^{n} \tau^i \xi) = H(\xi) + \sum_{j=1}^{n} H(\xi / \bigvee_{i=1}^{j} \tau^i \xi)$.
The proof is complete. ☐
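Property (iii) can be checked numerically on a crisp toy model (an illustrative sketch with an assumed finite space, uniform measure, and rotation; it sketches the idea, not the IF setting itself), using the formula $H(\xi) = 1 - \sum_{i} m(A_i)^2$:

```python
# Illustrative check of Theorem 14 (iii) on a crisp toy model (assumed finite
# space with uniform measure; not the IF setting itself).
from fractions import Fraction

omega = set(range(6))

def P(A):                      # uniform probability measure
    return Fraction(len(A), 6)

def T(w):                      # measure-preserving rotation
    return (w + 1) % 6

def tau(A):                    # tau(A) = T^{-1}(A)
    return {w for w in omega if T(w) in A}

def H(partition):              # logical entropy H(xi) = 1 - sum m(A_i)^2
    return 1 - sum(P(A) ** 2 for A in partition)

xi = [{0}, {1, 2}, {3, 4, 5}]            # a partition of omega
assert H([tau(A) for A in xi]) == H(xi)  # H(tau xi) = H(xi)
print(H(xi))                             # 1 - (1 + 4 + 9)/36 = 11/18
```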
Lemma 1.
Let $\{a_n\}_{n=1}^{\infty}$ be a sequence of non-negative real numbers such that $a_{r+s} \leq a_r + a_s$, for every $r, s \in \mathbb{N}$. Then $\lim_{n \to \infty} \frac{1}{n} a_n$ exists.
Proof. 
The proof can be found in [69]. ☐
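A quick numerical illustration of Lemma 1 (Fekete's subadditive lemma) with an assumed toy sequence $a_n = n + \log(n+1)$, which is subadditive and satisfies $\lim_{n \to \infty} a_n / n = 1$:

```python
# Toy illustration of Lemma 1 (Fekete's subadditive lemma) with the assumed
# sequence a_n = n + log(n + 1): it is subadditive, and a_n / n converges to 1.
import math

def a(n):
    return n + math.log(n + 1)

# spot-check subadditivity a_{r+s} <= a_r + a_s
for r in range(1, 30):
    for s in range(1, 30):
        assert a(r + s) <= a(r) + a(s)

for n in (10, 100, 1000, 10000):
    print(n, a(n) / n)         # ratios decrease toward the limit 1
```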
Proposition 4.
Let $(F, m, \tau)$ be an IF-dynamical system, and let $\xi$ be an IF-partition of $F$. Then the following limit exists:
$\lim_{n \to \infty} \frac{1}{n} H(\bigvee_{i=0}^{n-1} \tau^i \xi)$.
Proof. 
Put $a_n = H(\bigvee_{i=0}^{n-1} \tau^i \xi)$. According to Theorem 5 and property (iii) of the previous theorem, for every $r, s \in \mathbb{N}$, we have:
$a_{r+s} = H(\bigvee_{i=0}^{r+s-1} \tau^i \xi) \leq H(\bigvee_{i=0}^{r-1} \tau^i \xi) + H(\bigvee_{i=r}^{r+s-1} \tau^i \xi) = a_r + H(\tau^r(\bigvee_{i=0}^{s-1} \tau^i \xi)) = a_r + H(\bigvee_{i=0}^{s-1} \tau^i \xi) = a_r + a_s$.
Hence, by Lemma 1, $\lim_{n \to \infty} \frac{1}{n} H(\bigvee_{i=0}^{n-1} \tau^i \xi)$ exists. ☐
Definition 11.
Let $(F, m, \tau)$ be an IF-dynamical system, and let $\xi$ be any IF-partition of $F$. The logical entropy of $\tau$ with respect to $\xi$ is defined by:
$h(\tau, \xi) = \lim_{n \to \infty} \frac{1}{n} H(\bigvee_{i=0}^{n-1} \tau^i \xi)$.
The logical entropy of an IF-dynamical system $(F, m, \tau)$ is defined by the formula:
$h(\tau) = \sup\{h(\tau, \xi);\ \xi \text{ is an IF-partition of } F\}$.
Example 5.
Let $F$ be the family of all IF-events and $m: F \to [0, 1]$ be a state. Then the triplet $(F, m, I)$, where $I: F \to F$ is the identity mapping, is a trivial case of an IF-dynamical system. The operation $\vee$ is idempotent; therefore:
$h(I, \xi) = \lim_{n \to \infty} \frac{1}{n} H(\bigvee_{i=0}^{n-1} I^i \xi) = \lim_{n \to \infty} \frac{1}{n} H(\xi) = 0$, for every IF-partition $\xi$ of $F$,
and the logical entropy of $(F, m, I)$ is $h(I) = \sup\{h(I, \xi);\ \xi \text{ is an IF-partition of } F\} = 0$.
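Numerically, the computation in Example 5 amounts to the following sketch (the state values 0.5, 0.3, 0.2 are an arbitrary assumption of ours):

```python
# Example 5 numerically: for the identity map, the join of n copies of xi is
# xi itself, so (1/n) H(xi) -> 0. The state values below are an arbitrary choice.
probs = [0.5, 0.3, 0.2]                  # assumed values m(A_1), m(A_2), m(A_3)
H_xi = 1 - sum(p * p for p in probs)     # logical entropy H(xi) = 0.62
terms = [H_xi / n for n in (1, 10, 100, 1000)]
print(terms)                             # tends to 0 = h(I, xi)
```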
Theorem 15.
Let $(F, m, \tau)$ be an IF-dynamical system. If $\xi, \eta$ are IF-partitions of $F$ such that $\xi \preceq \eta$, then $h(\tau, \xi) \leq h(\tau, \eta)$.
Proof. 
If $\xi \preceq \eta$, then $\bigvee_{i=0}^{n-1} \tau^i \xi \preceq \bigvee_{i=0}^{n-1} \tau^i \eta$, for $n = 1, 2, \dots$. By property (ii) from Theorem 3, we have $H(\bigvee_{i=0}^{n-1} \tau^i \xi) \leq H(\bigvee_{i=0}^{n-1} \tau^i \eta)$, for $n = 1, 2, \dots$. Hence, $h(\tau, \xi) \leq h(\tau, \eta)$.
Definition 12.
Two IF-dynamical systems $(F_1, m_1, \tau_1)$, $(F_2, m_2, \tau_2)$ are said to be isomorphic if there exists a bijective mapping $\psi: F_1 \to F_2$ satisfying the following conditions:
(i)
$\psi(\tau_1(A)) = \tau_2(\psi(A))$, for every $A \in F_1$;
(ii)
$\psi(A \cdot B) = \psi(A) \cdot \psi(B)$, for every $A, B \in F_1$;
(iii)
for every $A, B \in F_1$, $A \oplus B$ exists if and only if $\psi(A) \oplus \psi(B)$ exists, and then $\psi(A \oplus B) = \psi(A) \oplus \psi(B)$;
(iv)
$m_1(A) = m_2(\psi(A))$, for every $A \in F_1$.
Lemma 2.
Let $(F_1, m_1, \tau_1)$, $(F_2, m_2, \tau_2)$ be isomorphic IF-dynamical systems, where the mapping $\psi: F_1 \to F_2$ represents their isomorphism. Let $\xi = \{A_1, \dots, A_n\}$ be an IF-partition of $F_1$. Then the system $\psi(\xi) = \{\psi(A_1), \dots, \psi(A_n)\}$ is an IF-partition of $F_2$ with the logical entropy $H(\psi(\xi)) = H(\xi)$, and moreover, $h(\tau_2, \psi(\xi)) = h(\tau_1, \xi)$.
Proof. 
Since $\bigoplus_{i=1}^{n} A_i$ exists, by condition (iii) of the previous definition $\bigoplus_{i=1}^{n} \psi(A_i)$ exists, and it holds that $\psi(\bigoplus_{i=1}^{n} A_i) = \bigoplus_{i=1}^{n} \psi(A_i)$. Therefore, by condition (iv) of the previous definition, we can write:
$m_2(\bigoplus_{i=1}^{n} \psi(A_i)) = m_2(\psi(\bigoplus_{i=1}^{n} A_i)) = m_1(\bigoplus_{i=1}^{n} A_i) = 1$.
On the other hand, $m_2(\bigoplus_{i=1}^{n} \psi(A_i)) = \sum_{i=1}^{n} m_2(\psi(A_i))$. This means that $\psi(\xi) = \{\psi(A_1), \dots, \psi(A_n)\}$ is an IF-partition of $F_2$. Let us calculate:
$H(\psi(\xi)) = \sum_{i=1}^{n} m_2(\psi(A_i))(1 - m_2(\psi(A_i))) = \sum_{i=1}^{n} m_1(A_i)(1 - m_1(A_i)) = H(\xi)$.
Consequently, using conditions (i) and (ii) of the previous definition, we get:
$h(\tau_2, \psi(\xi)) = \lim_{n \to \infty} \frac{1}{n} H(\bigvee_{i=0}^{n-1} \tau_2^i \psi(\xi)) = \lim_{n \to \infty} \frac{1}{n} H(\bigvee_{i=0}^{n-1} \psi(\tau_1^i \xi)) = \lim_{n \to \infty} \frac{1}{n} H(\psi(\bigvee_{i=0}^{n-1} \tau_1^i \xi)) = \lim_{n \to \infty} \frac{1}{n} H(\bigvee_{i=0}^{n-1} \tau_1^i \xi) = h(\tau_1, \xi)$. ☐
Lemma 3.
Let $(F_1, m_1, \tau_1)$, $(F_2, m_2, \tau_2)$ be isomorphic IF-dynamical systems, where the mapping $\psi: F_1 \to F_2$ represents their isomorphism. Then, for the inverse $\psi^{-1}: F_2 \to F_1$, the following properties are satisfied:
(i)
$\psi^{-1}(A \cdot B) = \psi^{-1}(A) \cdot \psi^{-1}(B)$, for every $A, B \in F_2$;
(ii)
for any $A, B \in F_2$, if $A \oplus B$ exists, then $\psi^{-1}(A) \oplus \psi^{-1}(B)$ exists, too, and $\psi^{-1}(A \oplus B) = \psi^{-1}(A) \oplus \psi^{-1}(B)$;
(iii)
$m_1(\psi^{-1}(A)) = m_2(A)$, for every $A \in F_2$;
(iv)
$m_1((\psi^{-1} \circ \tau_2)(A)) = m_1((\tau_1 \circ \psi^{-1})(A))$, for every $A \in F_2$.
Proof. 
Since $\psi: F_1 \to F_2$ is bijective, for every $A, B \in F_2$, there exist $A', B' \in F_1$ such that $\psi^{-1}(A) = A'$, $\psi^{-1}(B) = B'$.
(i)
We get:
$\psi^{-1}(A \cdot B) = \psi^{-1}(\psi(A') \cdot \psi(B')) = \psi^{-1}(\psi(A' \cdot B')) = A' \cdot B' = \psi^{-1}(A) \cdot \psi^{-1}(B)$.
(ii)
Let $A, B \in F_2$ be such that $A \oplus B$ exists. Then $\psi^{-1}(A \oplus B)$ exists because $\psi$ is surjective. Let us calculate:
$\psi^{-1}(A \oplus B) = \psi^{-1}(\psi(A') \oplus \psi(B')) = \psi^{-1}(\psi(A' \oplus B')) = A' \oplus B' = \psi^{-1}(A) \oplus \psi^{-1}(B)$.
(iii)
Let $A \in F_2$. Then:
$m_2(A) = m_2(\psi(A')) = m_1(A') = m_1(\psi^{-1}(A))$.
(iv)
Let $A \in F_2$. Then we have:
$m_1((\psi^{-1} \circ \tau_2)(A)) = m_1(\psi^{-1}(\tau_2(A))) = m_2(\tau_2(A)) = m_2(A)$,
and
$m_1((\tau_1 \circ \psi^{-1})(A)) = m_1(\tau_1(\psi^{-1}(A))) = m_1(\psi^{-1}(A)) = m_2(A)$.
Hence, the equality $m_1((\psi^{-1} \circ \tau_2)(A)) = m_1((\tau_1 \circ \psi^{-1})(A))$ holds. ☐
Theorem 16.
If the IF-dynamical systems $(F_1, m_1, \tau_1)$, $(F_2, m_2, \tau_2)$ are isomorphic, then $h(\tau_1) = h(\tau_2)$.
Proof. 
Let $\psi: F_1 \to F_2$ be a mapping representing an isomorphism of the IF-dynamical systems $(F_1, m_1, \tau_1)$ and $(F_2, m_2, \tau_2)$. By Lemma 2, if $\xi = \{A_1, \dots, A_n\}$ is an IF-partition of $F_1$, then the system $\psi(\xi) = \{\psi(A_1), \dots, \psi(A_n)\}$ is an IF-partition of $F_2$ and $h(\tau_2, \psi(\xi)) = h(\tau_1, \xi)$. Therefore:
$\{h(\tau_1, \xi);\ \xi \text{ is an IF-partition of } F_1\} \subset \{h(\tau_2, \eta);\ \eta \text{ is an IF-partition of } F_2\}$,
and consequently:
$h(\tau_1) = \sup\{h(\tau_1, \xi);\ \xi \text{ is an IF-partition of } F_1\} \leq \sup\{h(\tau_2, \eta);\ \eta \text{ is an IF-partition of } F_2\} = h(\tau_2)$.
The opposite inequality is obtained in a similar way; it suffices to consider the inverse $\psi^{-1}: F_2 \to F_1$. If $\eta = \{B_1, \dots, B_n\}$ is an IF-partition of $F_2$, then it is easy to verify that $\psi^{-1}(\eta) = \{\psi^{-1}(B_1), \dots, \psi^{-1}(B_n)\}$ is an IF-partition of $F_1$. Indeed, since $\bigoplus_{i=1}^{n} B_i$ exists, according to property (ii) from Lemma 3, $\bigoplus_{i=1}^{n} \psi^{-1}(B_i)$ exists, too. Moreover, we have:
$m_1(\bigoplus_{i=1}^{n} \psi^{-1}(B_i)) = m_1(\psi^{-1}(\bigoplus_{i=1}^{n} B_i)) = m_2(\bigoplus_{i=1}^{n} B_i) = 1$,
and
$m_1(\bigoplus_{i=1}^{n} \psi^{-1}(B_i)) = \sum_{i=1}^{n} m_1(\psi^{-1}(B_i))$.
By means of (iii) from the previous lemma, we get:
$H(\psi^{-1}(\eta)) = \sum_{i=1}^{n} m_1(\psi^{-1}(B_i))(1 - m_1(\psi^{-1}(B_i))) = \sum_{i=1}^{n} m_2(B_i)(1 - m_2(B_i)) = H(\eta)$.
Thus, according to the previous lemma, we can write:
$h(\tau_1, \psi^{-1}(\eta)) = \lim_{n \to \infty} \frac{1}{n} H(\bigvee_{i=0}^{n-1} \tau_1^i \psi^{-1}(\eta)) = \lim_{n \to \infty} \frac{1}{n} H(\bigvee_{i=0}^{n-1} \psi^{-1}(\tau_2^i \eta)) = \lim_{n \to \infty} \frac{1}{n} H(\psi^{-1}(\bigvee_{i=0}^{n-1} \tau_2^i \eta)) = \lim_{n \to \infty} \frac{1}{n} H(\bigvee_{i=0}^{n-1} \tau_2^i \eta) = h(\tau_2, \eta)$.
Therefore:
$\{h(\tau_2, \eta);\ \eta \text{ is an IF-partition of } F_2\} \subset \{h(\tau_1, \xi);\ \xi \text{ is an IF-partition of } F_1\}$,
and consequently:
$h(\tau_2) = \sup\{h(\tau_2, \eta);\ \eta \text{ is an IF-partition of } F_2\} \leq \sup\{h(\tau_1, \xi);\ \xi \text{ is an IF-partition of } F_1\} = h(\tau_1)$.
The proof is complete. ☐
In the final part, we prove an analogue of the Kolmogorov–Sinai theorem on generators for the studied situation. This theorem (see, e.g., [69]) is the main tool for calculating the entropy of a dynamical system. First, analogously to [62], we introduce the following definition.
Definition 13.
Let $(F, m, \tau)$ be an IF-dynamical system and let $\varsigma$ be an IF-partition of $F$. Then $\varsigma$ is called an m-generator of $(F, m, \tau)$ if, for any IF-partition $\xi$ of $F$, there exists an integer $k > 0$ such that $\xi \preceq \bigvee_{i=0}^{k} \tau^i \varsigma$.
Proposition 5.
Let $(F, m, \tau)$ be an IF-dynamical system, and let $\xi$ be an IF-partition of $F$. Then, for each natural number $k$, it holds that:
$h(\tau, \xi) = h(\tau, \bigvee_{i=0}^{k} \tau^i \xi)$.
Proof. 
Let $\xi$ be any IF-partition of $F$. Then, for each natural number $k$, we can write:
$h(\tau, \bigvee_{i=0}^{k} \tau^i \xi) = \lim_{n \to \infty} \frac{1}{n} H(\bigvee_{j=0}^{n-1} \tau^j (\bigvee_{i=0}^{k} \tau^i \xi)) = \lim_{n \to \infty} \frac{k+n}{n} \cdot \frac{1}{k+n} H(\bigvee_{t=0}^{k+n-1} \tau^t \xi) = \lim_{n \to \infty} \frac{1}{k+n} H(\bigvee_{t=0}^{k+n-1} \tau^t \xi) = h(\tau, \xi)$. ☐
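The index-shift step in the proof can be isolated as a purely arithmetic fact: multiplying by $\frac{k+n}{n} \to 1$ does not change the limit. A toy sketch (the sequence $b$ below is an arbitrary assumption of ours):

```python
# The index-shift step of Proposition 5, isolated as arithmetic: if b_n / n
# converges, then (1/n) b_{k+n} = ((k+n)/n) * b_{k+n}/(k+n) has the same limit,
# because (k+n)/n -> 1. The sequence b below is an arbitrary assumption.
k = 5

def b(n):
    return 3 * n + 7                     # b_n / n -> 3

vals = [(1 / n) * b(k + n) for n in (10, 100, 1000, 10000)]
print(vals)                              # approaches 3, the limit of b_n / n
```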
Theorem 17.
Let $(F, m, \tau)$ be an IF-dynamical system and let $\varsigma$ be an m-generator of $(F, m, \tau)$. Then:
$h(\tau) = h(\tau, \varsigma)$.
Proof. 
Let $\varsigma$ be an m-generator of $(F, m, \tau)$. Then, for any IF-partition $\xi$ of $F$, there exists an integer $k > 0$ such that $\xi \preceq \bigvee_{i=0}^{k} \tau^i \varsigma$. Consequently, by Theorem 15 and Proposition 5, for every IF-partition $\xi$ of $F$, we have:
$h(\tau, \xi) \leq h(\tau, \bigvee_{i=0}^{k} \tau^i \varsigma) = h(\tau, \varsigma)$.
Thus, we can conclude:
$h(\tau) = \sup\{h(\tau, \xi);\ \xi \text{ is an IF-partition of } F\} = h(\tau, \varsigma)$. ☐

6. Discussion

The purpose of the present study was to introduce the concepts of logical entropy and logical mutual information of experiments in the intuitionistic fuzzy case. Our results have been presented in Section 3, Section 4 and Section 5.
In Section 3, we defined the notions of logical entropy and logical conditional entropy for intuitionistic fuzzy experiments and proved the basic properties of the proposed measures. In particular, it was shown that the logical entropy of intuitionistic fuzzy experiments has properties analogous to those of the Shannon entropy of measurable partitions in classical probability theory. In Section 4, the results of the previous part were used to develop a logical information theory for the intuitionistic fuzzy case. The concepts of logical mutual information and logical conditional mutual information of intuitionistic fuzzy experiments were introduced, and the properties of these measures were studied. Specifically, the chain rule for logical mutual information was established, and the data processing inequality for conditionally independent IF-partitions was proved. We have also provided some numerical examples to illustrate the results.
In Section 5, the concept of logical entropy of IF-partitions was used to define the logical entropy of IF-dynamical systems. It was shown that the logical entropy of IF-dynamical systems is invariant under any isomorphism. Finally, we have provided an analogy of the Kolmogorov–Sinai theorem on generators for the intuitionistic fuzzy case.
All of the mentioned results can be applied immediately to the fuzzy case. On the other hand, it is promising to apply the methods developed here to some more general algebraic structures. For example, we mentioned in Theorem 1 the possibility of embedding $F$ into the family $M$ with a state $\bar{m}$ extending the state $m$. Actually, $M$ is an example of an MV-algebra with product [20,21,22,23,24,25,26,27,28]. Further research ought to investigate more fully the potential applications of the methods developed in this work.

Acknowledgments

The authors thank the editor and the referees for their valuable comments and suggestions. The authors are grateful to Constantine the Philosopher University in Nitra for covering the costs to publish in open access.

Author Contributions

Both authors contributed equally and significantly to the study and preparation of the article. They have read and approved the final version of the manuscript.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Gray, R.M. Entropy and Information Theory; Springer: Berlin/Heidelberg, Germany, 2009.
2. Shannon, C.E. A Mathematical Theory of Communication. Bell Syst. Tech. J. 1948, 27, 379–423.
3. Kolmogorov, A.N. New Metric Invariant of Transitive Dynamical Systems and Automorphisms of Lebesgue Spaces. Dokl. Russ. Acad. Sci. 1958, 119, 861–864.
4. Sinai, Y.G. Ergodic Theory with Applications to Dynamical Systems and Statistical Mechanics; Springer: Berlin/Heidelberg, Germany, 1990.
5. Sinai, Y.G. On the Notion of Entropy of a Dynamical System. Dokl. Russ. Acad. Sci. 1959, 124, 768–771.
6. Markechová, D. The entropy of fuzzy dynamical systems and generators. Fuzzy Sets Syst. 1992, 48, 351–363.
7. Piasecki, K. Probability of fuzzy events defined as denumerable additive measure. Fuzzy Sets Syst. 1985, 17, 271–284.
8. Mesiar, R. The Bayes principle and the entropy on fuzzy probability spaces. Int. J. Gen. Syst. 1991, 20, 67–72.
9. Mesiar, R.; Rybárik, J. Entropy of Fuzzy Partitions—A General Model. Fuzzy Sets Syst. 1998, 99, 73–79.
10. Dumitrescu, D. Entropy of a fuzzy dynamical system. Fuzzy Sets Syst. 1995, 70, 45–57.
11. Rahimi, M.; Riazi, A. On local entropy of fuzzy partitions. Fuzzy Sets Syst. 2014, 234, 97–108.
12. Rahimi, M.; Assari, A.; Ramezani, F. A Local Approach to Yager Entropy of Dynamical Systems. Int. J. Fuzzy Syst. 2015, 18, 98–102.
13. Srivastava, P.; Khare, M.; Srivastava, Y.K. M-equivalence, entropy and F-dynamical systems. Fuzzy Sets Syst. 2001, 121, 275–283.
14. Markechová, D.; Riečan, B. Entropy of Fuzzy Partitions and Entropy of Fuzzy Dynamical Systems. Entropy 2016, 18, 19.
15. Riečan, B. An entropy construction inspired by fuzzy sets. Soft Comput. 2003, 7, 486–488.
16. Riečan, B. On a type of entropy of dynamical systems. Tatra Mt. Math. Publ. 1992, 1, 135–140.
17. Riečan, B. On some modifications of the entropy of dynamical systems. Atti del Seminario Matematico e Fisico dell'Universita di Modena 1994, 42, 157–166.
18. Dubois, D.; Prade, M. A review of fuzzy set aggregation connectives. Inf. Sci. 1985, 36, 85–121.
19. Zadeh, L.A. Fuzzy Sets. Inf. Control 1965, 8, 338–358.
20. Chang, C.C. Algebraic analysis of many valued logics. Trans. Am. Math. Soc. 1958, 88, 467–490.
21. Riečan, B.; Mundici, D. Probability on MV-algebras. In Handbook of Measure Theory; Pap, E., Ed.; Elsevier: Amsterdam, The Netherlands, 2002; pp. 869–910.
22. Riečan, B.; Neubrunn, T. Integral, Measure and Ordering; Springer: Dordrecht, The Netherlands, 1997.
23. Dvurečenskij, A.; Pulmannová, S. New Trends in Quantum Structures; Springer: Dordrecht, The Netherlands, 2000.
24. Riečan, B. On the product MV-algebras. Tatra Mt. Math. 1999, 16, 143–149.
25. Montagna, F. An algebraic approach to propositional fuzzy logic. J. Log. Lang. Inf. 2000, 9, 91–124.
26. Jakubík, J. On product MV algebras. Czechoslov. Math. J. 2002, 52, 797–810.
27. Di Nola, A.; Dvurečenskij, A. Product MV-algebras. Mult. Valued Log. 2001, 6, 193–215.
28. Markechová, D.; Riečan, B. Kullback–Leibler Divergence and Mutual Information of Experiments in Product MV Algebras. Entropy 2017, 19, 267.
29. Kôpka, F.; Chovanec, F. D-posets. Math. Slovaca 1994, 44, 21–34.
30. Kôpka, F. Quasiproduct on Boolean D-posets. Int. J. Theor. Phys. 2008, 47, 26–35.
31. Frič, R. On D-posets of fuzzy sets. Math. Slovaca 2014, 64, 545–554.
32. Foulis, D.J.; Bennet, M.K. Effect algebras and unsharp quantum logics. Found. Phys. 1994, 24, 1331–1352.
33. Paulínyová, M. D-posets and effect algebras. Notes Intuit. Fuzzy Sets 2014, 20, 32–40.
34. Kluvancová, D. A-poset with multiplicative operation. In Advances in Intelligent Systems and Computing; Atanassov, K., Ed.; Springer: Cham, Germany, 2016; Volume 401, pp. 51–60.
35. Frič, R.; Papčo, M. On probability domains. Int. J. Theor. Phys. 2010, 49, 3092–3100.
36. Skřivánek, V.; Frič, R. Generalized random events. Int. J. Theor. Phys. 2015, 54, 4386–4396.
37. Di Nola, A.; Dvurečenskij, A.; Hyčko, M.; Manara, C. Entropy on Effect Algebras with the Riesz Decomposition Property II: MV-Algebras. Kybernetika 2005, 41, 161–176.
38. Riečan, B. Kolmogorov–Sinaj entropy on MV-algebras. Int. J. Theor. Phys. 2005, 44, 1041–1052.
39. Petrovičová, J. On the entropy of partitions in product MV-algebras. Soft Comput. 2000, 4, 41–44.
40. Petrovičová, J. On the entropy of dynamical systems in product MV-algebras. Fuzzy Sets Syst. 2001, 121, 347–351.
41. Di Nola, A.; Dvurečenskij, A.; Hyčko, M.; Manara, C. Entropy on Effect Algebras with the Riesz Decomposition Property I: Basic Properties. Kybernetika 2005, 41, 143–160.
42. Giski, Z.E.; Ebrahimi, M. Entropy of Countable Partitions on Effect Algebras with the Riesz Decomposition Property and Weak Sequential Effect Algebras. Cankaya Univ. J. Sci. Eng. 2015, 12, 20–39.
43. Ebrahimi, M.; Mosapour, B. The Concept of Entropy on D-posets. Cankaya Univ. J. Sci. Eng. 2013, 10, 137–151.
44. Atanassov, K. Intuitionistic fuzzy sets. Fuzzy Sets Syst. 1986, 20, 87–96.
45. Atanassov, K. More on intuitionistic fuzzy sets. Fuzzy Sets Syst. 1989, 33, 37–45.
46. Atanassov, K. Intuitionistic Fuzzy Sets: Theory and Applications; Physica Verlag: New York, NY, USA, 1999.
47. Riečan, B. Probability theory on IF events. In Algebraic and Proof-Theoretic Aspects of Non-Classical Logics; Papers in Honor of Daniele Mundici on the Occasion of His 60th Birthday; Lecture Notes in Computer Science; Springer: New York, NY, USA, 2007; pp. 290–308.
48. Čunderlíková, K. The individual ergodic theorem on the IF-events with product. Soft Comput. 2010, 14.
49. Farnoosh, R.; Rahimi, M.; Kumar, P. Removing noise in a digital image using a new entropy method based on intuitionistic fuzzy sets. In Proceedings of the International Conference on Fuzzy Systems, Vancouver, BC, Canada, 24–29 July 2016; Institute of Electrical and Electronics Engineers Inc.: New York, NY, USA, 2016; pp. 1328–1332.
50. Burillo, P.; Bustince, H. Entropy on intuitionistic fuzzy sets and on interval-valued fuzzy sets. Fuzzy Sets Syst. 1996, 78, 305–316.
51. Ďurica, M. Entropy on IF-events. Notes Intuit. Fuzzy Sets 2007, 13, 30–40.
52. Szmidt, E.; Kacprzyk, J. Entropy for intuitionistic fuzzy sets. Fuzzy Sets Syst. 2001, 118, 467–477.
53. Ban, A. Measurable entropy of intuitionistic fuzzy dynamical system. Notes Intuit. Fuzzy Sets 2000, 6, 35–47.
54. Rao, C.R. Diversity and dissimilarity coefficients. A unified approach. Theor. Popul. Biol. 1982, 21, 24–43.
55. Good, I.J. Comment (on Patil and Taillie: Diversity as a Concept and its Measurement). J. Am. Stat. Assoc. 1982, 77, 561–563.
56. Patil, G.P.; Taillie, C. Diversity as a Concept and its Measurement. J. Am. Stat. Assoc. 1982, 77, 548–561.
57. Ellerman, D. An Introduction to Logical Entropy and Its Relation to Shannon Entropy. Int. J. Seman. Comput. 2013, 7, 121–145.
58. Markechová, D.; Riečan, B. Logical Entropy of Fuzzy Dynamical Systems. Entropy 2016, 18, 157.
59. Ebrahimzadeh, A.; Giski, Z.E.; Markechová, D. Logical Entropy of Dynamical Systems—A General Model. Mathematics 2017, 5.
60. Ebrahimzadeh, A. Logical entropy of quantum dynamical systems. Open Phys. 2016, 14.
61. Ebrahimzadeh, A. Quantum conditional logical entropy of dynamical systems. Ital. J. Pure Appl. Math. 2016, 36, 879–886.
62. Mohammadi, U. The Concept of Logical Entropy on D-posets. J. Algebr. Struct. Their Appl. 2016, 1, 53–61.
63. Riečan, B.; Atanassov, K. Some properties of operations conjunction and disjunction from Łukasiewicz type over intuitionistic fuzzy sets. Part 1. Notes Intuit. Fuzzy Sets 2014, 20, 3.
64. Riečan, B.; Atanassov, K. Some properties of operations conjunction and disjunction from Łukasiewicz type over intuitionistic fuzzy sets. Part 2. Notes Intuit. Fuzzy Sets 2014, 20, 4.
65. Atanassov, K.; Riečan, B. On two operations over intuitionistic fuzzy sets. J. Appl. Math. Stat. Inform. 2006, 2, 145–148.
66. Gutierrez Garcia, J.; Rodabaugh, S.E. Order-theoretic, topological, categorical redundancies of interval-valued sets, grey sets, vague sets, interval-valued "intuitionistic" sets, "intuitionistic" fuzzy sets and topologies. Fuzzy Sets Syst. 2005, 156, 445–484.
67. Ciungu, L.; Riečan, B. Representation theorem of probabilities on IFS-events. Inf. Sci. 2010, 180, 793–798.
68. Riečan, B. On finitely additive IF-states. In Proceedings of the 7th IEEE International Conference Intelligent Systems IS2014, Warsaw, Poland, 24–26 September 2014; Volume 1, pp. 149–156.
69. Walters, P. An Introduction to Ergodic Theory; Springer: New York, NY, USA, 1982.
