Review

Quantum Entropy and Its Applications to Quantum Communication and Statistical Physics

Department of Information Sciences, Tokyo University of Science, Noda City, Chiba 278-8510, Japan
*
Authors to whom correspondence should be addressed.
Entropy 2010, 12(5), 1194-1245; https://doi.org/10.3390/e12051194
Submission received: 10 February 2010 / Accepted: 30 April 2010 / Published: 7 May 2010
(This article belongs to the Special Issue Information and Entropy)

Abstract

Quantum entropy is a fundamental concept for quantum information that has recently been developed in various directions. We review the mathematical aspects of quantum entropies and discuss some applications to quantum communication and statistical physics. All topics treated here are related in some way to quantum entropies that the present authors have studied. Many other fields recently developed in quantum information theory, such as quantum algorithms, quantum teleportation and quantum cryptography, are discussed fully in the book [60].

1. Introduction

The theoretical foundation supporting today’s information-oriented society is information theory, founded by Shannon [1] about 60 years ago. Generally, this theory treats the efficiency of information transmission by using a measure of complexity, namely the entropy, in the commutative system of a signal space. Information theory is based on a mathematically formulated entropy theory. Before Shannon’s work, entropy was first introduced in thermodynamics by Clausius and in statistical mechanics by Boltzmann; these entropies are criteria characterizing properties of physical systems. Shannon’s construction of entropy uses discrete probability theory, based on the idea that “information obtained from a system with large vagueness is highly profitable”, and he introduced (1) the entropy, measuring the amount of information in the state of a system, and (2) the mutual entropy (information), representing the amount of information correctly transmitted from the initial system to the final system through a channel. This entropy theory, combined with the development of probability theory due to Kolmogorov, gives a mathematical foundation of classical information theory, with the relative entropy of two states by Kullback-Leibler [2] and the mutual entropy by Gelfand-Kolmogorov-Yaglom [3,4] on continuous probability spaces. In addition, a channel of discrete systems given by a transition probability was generalized to the integral kernel theory. The channel of continuous systems is expressed as a state change on a commutative probability space by introducing the averaging operator of Umegaki, and it is extended to the quantum channel describing a state change in noncommutative systems [5].
Since present optical communication uses laser signals, it is necessary to construct a new information theory dealing with quantum quantities in order to discuss the efficiency of information transmission in optical communication processes rigorously. This is called quantum information theory, which extends the important measures such as the entropy, the relative entropy and the mutual entropy formulated by Shannon et al. to quantum systems. The study of entropy in quantum systems was begun by von Neumann [6] in 1932; the quantum relative entropy was introduced by Umegaki [7] and extended to general quantum systems by Araki [8,9], Uhlmann [10] and Donald [11]. In quantum information theory, one of the important subjects is to examine how much information is correctly carried through a channel, so it is necessary to extend the mutual entropy of classical systems to quantum systems.
The mutual entropy in a classical system is defined by the joint probability distribution between input and output systems. However, a joint probability does not generally exist in quantum systems (see [12]). The compound state devised in [13,14] provides the key to solving this problem. It is defined through the Schatten decomposition [15] (one-dimensional orthogonal decomposition) of the input state and the quantum channel. Ohya introduced the quantum mutual entropy based on the compound state in 1983 [13,16]. Since it satisfies Shannon’s inequalities, it describes the amount of information correctly transmitted from the input system through a quantum channel. Using fundamental entropies such as the von Neumann entropy and the Ohya mutual entropy, the complete quantum version of Shannon’s information theory was formulated.
The quantum entropy for a density operator was defined by von Neumann [6] about 20 years before the Shannon entropy appeared. The properties of this entropy are summarized in [17]. The main properties of the quantum relative entropy are taken from the articles [8,9,10,11,17,18,19,20]. A quantum mutual entropy was introduced by Holevo, Levitin and Ingarden [21,22] for classical inputs and outputs passing through a possibly quantum channel. The completely quantum mechanical mutual entropy was defined by Ohya [13], and its generalization to C*-algebras was done in [23]. The applications of the mutual entropy have been studied in various fields [16,24,25,26,27,28,29]. The applications of channels were given in [13,16,30,31,32,33].
Concerning quantum communication, the following studies have been done. The characterization of quantum communication or stochastic processes is discussed, and beam splitting was rigorously studied by Fichtner, Freudenberg and Liebscher [34,35]. The transition expectation was introduced by Accardi [36] to study quantum Markov processes [37]. The noisy optical channel was discussed in [28]. In quantum optics, a linear amplifier has been discussed by several authors [38,39], and its rigorous expression given here is in [30]. The channel capacities are discussed here based on the papers [40,41]. The bound of the capacity was first studied by Holevo [42] and then by many others [39,41,43].
The entangled state is an important concept in quantum theory; it has recently been studied by several authors, and its rigorous mathematical study was given in [44,45].
Let us comment on general entropies of states in C*-dynamical systems. The C*-entropy was introduced in [23] and its properties are discussed in [25,26,46]. The relative entropy for two general states was introduced by Araki [8,9] in von Neumann algebras and by Uhlmann [10] in *-algebras. The mutual entropy in C*-algebras was introduced by Ohya [16]. Other references on quantum entropy are discussed fully in the book [17].
The classical dynamical (or Kolmogorov-Sinai) entropy $S(T)$ [47] for a measure-preserving transformation $T$ was defined on a message space through finite partitions of the measurable space.
The classical coding theorems of Shannon are important tools for analysing communication processes; they have been formulated in terms of the mean dynamical entropy and the mean dynamical mutual entropy. The mean dynamical entropy represents the amount of information per letter of a signal sequence sent from an input source, and the mean dynamical mutual entropy represents the amount of information per letter of the signal received in an output system.
The quantum dynamical entropy (QDE) was studied by Connes-Størmer [48], Emch [49], Connes-Narnhofer-Thirring [50], Alicki-Fannes [51], and others [52,53,54,55,56]. Their dynamical entropies were defined on the observable spaces. Recently, the quantum dynamical entropy and the quantum dynamical mutual entropy were studied by the present authors [16,29]: (1) the dynamical entropy is defined on the state spaces through the complexity of Information Dynamics [57]; (2) a definition through quantum Markov chains (QMC) was given in [58]; (3) the dynamical entropy for completely positive (CP) maps was introduced in [59].
In this review paper, we give an overview of the entropy theory mentioned above. The details are discussed in the books [17,60].

2. Setting of Quantum Systems

We first summarize the mathematical descriptions of both classical and quantum systems.
(1) Classical System: Let $M(\Omega)$ be the set of all real random variables on a probability measure space $(\Omega, \mathcal{F}, \mu)$ and $P(\Omega)$ be the set of all probability measures on the measurable space $(\Omega, \mathcal{F})$. $f \in M(\Omega)$ and $\mu \in P(\Omega)$ represent an observable and a state of the classical system, respectively. The expectation value of the observable $f \in M(\Omega)$ with respect to a state $\mu \in P(\Omega)$ is given by $\int_\Omega f\,d\mu$.
(2) Usual Quantum Systems: We denote the set of all bounded linear operators on a Hilbert space $\mathcal{H}$ by $B(\mathcal{H})$, and the set of all density operators on $\mathcal{H}$ by $S(\mathcal{H})$. A Hermitian operator $A \in B(\mathcal{H})$ and $\rho \in S(\mathcal{H})$ denote an observable and a state of the usual quantum system. The expectation value of the observable $A \in B(\mathcal{H})$ with respect to a state $\rho \in S(\mathcal{H})$ is given by $\mathrm{tr}\,\rho A$.
(3) General Quantum System: More generally, let $\mathcal{A}$ be a C*-algebra (i.e., a complex normed algebra with involution $*$ such that $\|A\| = \|A^*\|$, $\|A^*A\| = \|A\|^2$, complete w.r.t. $\|\cdot\|$) and $S(\mathcal{A})$ be the set of all states on $\mathcal{A}$ (i.e., positive continuous linear functionals $\varphi$ on $\mathcal{A}$ such that $\varphi(I) = 1$ if the unit $I$ is in $\mathcal{A}$).
If $\Lambda: \mathcal{A} \to \mathcal{B}$ is a unital map from a C*-algebra $\mathcal{A}$ to a C*-algebra $\mathcal{B}$, then its dual map $\Lambda^*: S(\mathcal{B}) \to S(\mathcal{A})$ is called a channel; that is, $\mathrm{tr}\,\rho\,\Lambda(A) = \mathrm{tr}\,(\Lambda^*\rho)A$ for $\rho \in S(\mathcal{B})$, $A \in \mathcal{A}$. Remark that the algebra $\mathcal{B}$ will sometimes be denoted $\bar{\mathcal{A}}$.
Such an algebraic approach contains both classical and quantum theories. The descriptions of a classical probabilistic system (CDS), a usual quantum dynamical system (QDS) and a general quantum dynamical system (GQDS) are given in the following table:
Table 1.1. Descriptions of CDS, QDS and GQDS.
               CDS                       QDS                                    GQDS
observable     real r.v. f in M(Ω)      Hermitian operator A on H              self-adjoint element A in
                                         (self-adjoint operator in B(H))        the C*-algebra 𝒜
state          probability measure      density operator ρ on H                p.l. functional φ ∈ S(𝒜)
               μ in P(Ω)                                                        with φ(I) = 1
expectation    ∫_Ω f dμ                 tr ρA                                   φ(A)
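To make the three descriptions in the table concrete, here is a minimal numerical sketch (Python with NumPy; the states and observables are invented for illustration) comparing the classical expectation $\int f\,d\mu$ with the quantum expectation $\mathrm{tr}\,\rho A$ for commuting data:

```python
import numpy as np

# Classical system: state = probability measure mu, observable = real random variable f
mu = np.array([0.5, 0.3, 0.2])       # a probability measure on a 3-point space
f = np.array([1.0, -1.0, 4.0])       # a real random variable (observable)
classical_expectation = float(mu @ f)            # corresponds to  int f d(mu)

# Quantum system: state = density operator rho, observable = Hermitian operator A
rho = np.diag([0.5, 0.3, 0.2]).astype(complex)   # a density operator (positive, trace 1)
A = np.diag([1.0, -1.0, 4.0]).astype(complex)    # a Hermitian observable
quantum_expectation = float(np.trace(rho @ A).real)   # corresponds to  tr(rho A)

# For commuting (diagonal) data the two descriptions give the same number.
print(classical_expectation, quantum_expectation)
```

The quantum description strictly contains the classical one: a diagonal density operator is just a probability distribution over the standard basis.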

3. Communication Processes

We discuss the quantum communication processes in this section.
Let $M$ be the infinite direct product of the alphabet $A$: $M = A^{\mathbb{Z}} = \prod_{\mathbb{Z}} A$, called a message space. A coding is a one-to-one map $\Xi$ from $M$ to some space $X$, which is called the coded space. This space $X$ may be a classical object or a quantum object. For a quantum system, $X$ may be a space of quantum observables on a Hilbert space $\mathcal{H}$; then the coded input system is described by $(B(\mathcal{H}), S(\mathcal{H}))$. The coded output space is denoted by $\widetilde{X}$ and the decoded output space $\widetilde{M}$ is made of other alphabets. A transmission (map) $\Gamma$ from $X$ to $\widetilde{X}$ (actually its dual map, as discussed below) is called a channel, which reflects the properties of a physical device. With a decoding $\widetilde{\Xi}$, the whole information transmission process is written as
$$M \xrightarrow{\ \Xi\ } X \xrightarrow{\ \Gamma\ } \widetilde{X} \xrightarrow{\ \widetilde{\Xi}\ } \widetilde{M}$$
That is, a message $m \in M$ is coded to $\Xi(m)$ and sent to the output system through a channel $\Gamma$; the output coded message becomes $\Gamma(\Xi(m))$ and is decoded to $\widetilde{\Xi}(\Gamma(\Xi(m)))$ at the receiver.
Then the occurrence probability of each message in the sequence $m_1, m_2, \ldots, m_N$ of $N$ messages is denoted by $\rho = \{p_k\}$, which is a state in a classical system. If $\Xi$ is a quantum coding, then $\Xi(m)$ is a quantum object (state) such as a coherent state. Here we consider such a quantum coding, that is, $\Xi(m_k)$ is a quantum state, and we denote $\Xi(m_k)$ by $\sigma_k$. Thus the coded state for the sequence $m_1, m_2, \ldots, m_N$ is written as
$$\sigma = \sum_k p_k \sigma_k$$
This state is transmitted through the dual map of $\Gamma$, which is called a channel in the sequel. This channel (the dual of $\Gamma$) is expressed by a completely positive mapping $\Gamma^*$, in the sense of Section 6, from the state space of $X$ to that of $\widetilde{X}$; hence the output coded quantum state $\widetilde{\sigma}$ is $\Gamma^*\sigma$. Since the information transmission process can be understood as a process of state (probability) change, when $\Omega$ and $\widetilde{\Omega}$ are classical and $X$ and $\widetilde{X}$ are quantum, the process is written as
$$P(\Omega) \xrightarrow{\ \Xi^*\ } S(\mathcal{H}) \xrightarrow{\ \Gamma^*\ } S(\widetilde{\mathcal{H}}) \xrightarrow{\ \widetilde{\Xi}^*\ } P(\widetilde{\Omega})$$
where $\Xi^*$ (resp. $\widetilde{\Xi}^*$) is the channel corresponding to the coding $\Xi$ (resp. $\widetilde{\Xi}$) and $S(\mathcal{H})$ (resp. $S(\widetilde{\mathcal{H}})$) is the set of all density operators (states) on $\mathcal{H}$ (resp. $\widetilde{\mathcal{H}}$).
We have to be careful in studying the objects in the above transmission process; namely, we have to make clear which object we are going to study. For instance, if we want to know the information of a quantum state through a quantum channel $\Gamma$ (or $\Gamma^*$), then we have to take $X$ so as to describe a quantum system, like a Hilbert space, and we need to start the study from a quantum state in the quantum space $X$, not from a classical state associated to messages. We have a similar situation when we treat state change (computation) in a quantum computer.

4. Quantum Entropy for Density Operators

The entropy of a quantum state was introduced by von Neumann. The entropy of a state $\rho$ is defined by
$$S(\rho) = -\mathrm{tr}\,\rho\log\rho$$
For a state $\rho$, there exists a unique spectral decomposition
$$\rho = \sum_k \mu_k P_k$$
where $\mu_k$ is an eigenvalue of $\rho$ and $P_k$ is the associated projection for each $\mu_k$. The projection $P_k$ is not one-dimensional when $\mu_k$ is degenerate, so that the spectral decomposition can be further decomposed into one-dimensional projections. Such a decomposition is called a Schatten decomposition, namely,
$$\rho = \sum_{k_j} \mu_{k_j} E_{k_j}$$
where $E_{k_j}$ is the one-dimensional projection associated with $\mu_k$ and the degenerate eigenvalue $\mu_k$ repeats $\dim P_k$ times; for instance, if the eigenvalue $\mu_1$ has degeneracy 3, then $\mu_{11} = \mu_{12} = \mu_{13} < \mu_2$. To simplify notation we write the Schatten decomposition as
$$\rho = \sum_k \mu_k E_k$$
where the numbers $\{\mu_k\}$ form a probability distribution:
$$\sum_k \mu_k = 1, \quad \mu_k \geq 0$$
This Schatten decomposition is not unique unless every eigenvalue is non-degenerate. Then the entropy (von Neumann entropy) $S(\rho)$ of a state $\rho$ equals the Shannon entropy of the probability distribution $\{\mu_k\}$:
$$S(\rho) = -\sum_k \mu_k \log \mu_k$$
Therefore the von Neumann entropy contains the Shannon entropy as a special case.
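This correspondence suggests a direct way to compute the von Neumann entropy numerically: diagonalize $\rho$ and take the Shannon entropy of its eigenvalues. A minimal sketch in Python with NumPy (natural logarithm, matching the formulas above; the example states are invented):

```python
import numpy as np

def von_neumann_entropy(rho):
    """S(rho) = -tr(rho log rho), via the eigenvalues (natural logarithm)."""
    mu = np.linalg.eigvalsh(rho)      # spectral decomposition: eigenvalues mu_k
    mu = mu[mu > 1e-12]               # convention: 0 log 0 = 0
    return float(-np.sum(mu * np.log(mu)))   # Shannon entropy of {mu_k}

# A maximally mixed qubit has entropy log 2; a pure state has entropy 0.
print(von_neumann_entropy(np.eye(2) / 2))                       # ~ 0.6931
print(von_neumann_entropy(np.array([[1.0, 0.0], [0.0, 0.0]])))  # 0.0
```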
Let us summarize the fundamental properties of the entropy S ( ρ ) .
Theorem 1 
For any density operator $\rho \in S(\mathcal{H})$, the following hold:
(1) 
Positivity: $S(\rho) \geq 0$.
(2) 
Symmetry: Let $\rho' = U\rho U^*$ for a unitary operator $U$. Then
$$S(\rho') = S(\rho)$$
(3) 
Concavity: $S(\lambda\rho_1 + (1-\lambda)\rho_2) \geq \lambda S(\rho_1) + (1-\lambda)S(\rho_2)$ for any $\rho_1, \rho_2 \in S(\mathcal{H})$ and $\lambda \in [0,1]$.
(4) 
Additivity: $S(\rho_1 \otimes \rho_2) = S(\rho_1) + S(\rho_2)$ for any $\rho_k \in S(\mathcal{H}_k)$.
(5) 
Subadditivity: For the reduced states $\rho_1, \rho_2$ of $\rho \in S(\mathcal{H}_1 \otimes \mathcal{H}_2)$,
$$S(\rho) \leq S(\rho_1) + S(\rho_2)$$
(6) 
Lower semicontinuity: If $\|\rho_n - \rho\|_1 \to 0$ (i.e., $\mathrm{tr}\,|\rho_n - \rho| \to 0$) as $n \to \infty$, then
$$S(\rho) \leq \liminf_{n\to\infty} S(\rho_n)$$
(7) 
Continuity: Let $\rho_n, \rho$ be elements of $S(\mathcal{H})$ satisfying the following conditions: (i) $\rho_n \to \rho$ weakly as $n \to \infty$, (ii) $\rho_n \leq A$ (for all $n$) for some compact operator $A$, and (iii) $-\sum_k a_k \log a_k < +\infty$ for the eigenvalues $\{a_k\}$ of $A$. Then $S(\rho_n) \to S(\rho)$.
(8) 
Strong subadditivity: Let $\mathcal{H} = \mathcal{H}_1 \otimes \mathcal{H}_2 \otimes \mathcal{H}_3$ and denote the reduced states $\mathrm{tr}_{\mathcal{H}_i \otimes \mathcal{H}_j}\,\rho$ by $\rho_k$ and $\mathrm{tr}_{\mathcal{H}_k}\,\rho$ by $\rho_{ij}$. Then $S(\rho) + S(\rho_2) \leq S(\rho_{12}) + S(\rho_{23})$ and $S(\rho_1) + S(\rho_2) \leq S(\rho_{13}) + S(\rho_{23})$.
(9) 
Entropy increase: (i) Let $\mathcal{H}$ be finite dimensional. If the channel $\Lambda^*$ is unital, that is, the dual map $\Lambda$ of $\Lambda^*$ satisfies $\Lambda(I) = I$, then $S(\Lambda^*\rho) \geq S(\rho)$. (ii) For an arbitrary Hilbert space $\mathcal{H}$, if the dual map $\Lambda$ of the channel $\Lambda^*$ maps $S(\mathcal{H})$ into itself (i.e., $\Lambda\rho \in S(\mathcal{H})$), then $S(\Lambda^*\rho) \geq S(\rho)$.
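Properties (3)-(5) can be spot-checked numerically on randomly generated density operators; a small sketch in Python with NumPy (random states are invented test data, partial traces are computed by reshaping):

```python
import numpy as np

rng = np.random.default_rng(0)

def random_density(n):
    """An invented random density operator: G G* normalized to unit trace."""
    g = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    rho = g @ g.conj().T
    return rho / np.trace(rho).real

def entropy(rho):
    mu = np.linalg.eigvalsh(rho)
    mu = mu[mu > 1e-12]
    return float(-np.sum(mu * np.log(mu)))

# (3) Concavity of S
r1, r2, lam = random_density(4), random_density(4), 0.3
assert entropy(lam*r1 + (1-lam)*r2) >= lam*entropy(r1) + (1-lam)*entropy(r2) - 1e-9

# (4) Additivity on a product state
a, b = random_density(2), random_density(2)
assert abs(entropy(np.kron(a, b)) - (entropy(a) + entropy(b))) < 1e-9

# (5) Subadditivity: reduced states via partial traces of a state on H1 (x) H2
rho12 = random_density(4)
rho1 = np.trace(rho12.reshape(2, 2, 2, 2), axis1=1, axis2=3)  # trace over H2
rho2 = np.trace(rho12.reshape(2, 2, 2, 2), axis1=0, axis2=2)  # trace over H1
assert entropy(rho12) <= entropy(rho1) + entropy(rho2) + 1e-9
print("concavity, additivity and subadditivity hold for these random states")
```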
In order to prove the theorem, we need the following lemma.
Lemma 2 
Let $f$ be a convex $C^1$ function on a proper domain and $\rho, \sigma \in S(\mathcal{H})$. Then
(1) 
Klein’s inequality: $\mathrm{tr}\{f(\rho) - f(\sigma) - (\rho - \sigma)f'(\sigma)\} \geq 0$.
(2) 
Peierls’ inequality: $\sum_k f(\langle x_k, \rho x_k\rangle) \leq \mathrm{tr}\,f(\rho)$ for any CONS $\{x_k\}$ in $\mathcal{H}$. (Remark: $\rho = \sum_n \mu_n E_n \Rightarrow f(\rho) = \sum_n f(\mu_n)E_n$.)
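Both inequalities can be checked numerically, e.g., with the convex function $f(x) = x\log x$, for which Klein's inequality reduces to the positivity of the relative entropy. A sketch in Python with NumPy, using invented random states:

```python
import numpy as np

rng = np.random.default_rng(1)

def random_density(n):
    g = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    rho = g @ g.conj().T
    return rho / np.trace(rho).real

def funm(rho, f):
    """Apply f to a Hermitian matrix through its spectral decomposition."""
    mu, U = np.linalg.eigh(rho)
    return (U * f(mu)) @ U.conj().T

f = lambda x: x * np.log(x)          # convex C^1 function on (0, inf)
fprime = lambda x: np.log(x) + 1.0

rho, sigma = random_density(3), random_density(3)

# Klein: tr{ f(rho) - f(sigma) - (rho - sigma) f'(sigma) } >= 0
klein = np.trace(funm(rho, f) - funm(sigma, f) - (rho - sigma) @ funm(sigma, fprime)).real
assert klein >= -1e-10

# Peierls: sum_k f(<x_k, rho x_k>) <= tr f(rho) for any CONS {x_k}
Q, _ = np.linalg.qr(rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3)))
lhs = sum(f((Q[:, k].conj() @ rho @ Q[:, k]).real) for k in range(3))
rhs = np.trace(funm(rho, f)).real
assert lhs <= rhs + 1e-10
print("Klein's and Peierls' inequalities hold for this sample")
```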

5. Relative Entropy for Density Operators

For two states $\rho, \sigma \in S(\mathcal{H})$, the relative entropy was first defined by Umegaki:
$$S(\rho, \sigma) = \begin{cases} \mathrm{tr}\,\rho(\log\rho - \log\sigma) & (\rho \ll \sigma) \\ +\infty & (\text{otherwise}) \end{cases}$$
where $\rho \ll \sigma$ means that $\mathrm{tr}\,\sigma A = 0 \Rightarrow \mathrm{tr}\,\rho A = 0$ for $A \geq 0$ (equivalently, $\overline{\mathrm{ran}\,\rho} \subseteq \overline{\mathrm{ran}\,\sigma}$). The main properties of the relative entropy are summarized as follows:
Theorem 3 
The relative entropy satisfies the following properties:
(1) 
Positivity: $S(\rho, \sigma) \geq 0$, with $S(\rho, \sigma) = 0$ iff $\rho = \sigma$.
(2) 
Joint convexity: $S(\lambda\rho_1 + (1-\lambda)\rho_2, \lambda\sigma_1 + (1-\lambda)\sigma_2) \leq \lambda S(\rho_1, \sigma_1) + (1-\lambda)S(\rho_2, \sigma_2)$ for any $\lambda \in [0,1]$.
(3) 
Additivity: $S(\rho_1 \otimes \rho_2, \sigma_1 \otimes \sigma_2) = S(\rho_1, \sigma_1) + S(\rho_2, \sigma_2)$.
(4) 
Lower semicontinuity: If $\lim_{n\to\infty}\|\rho_n - \rho\|_1 = 0$ and $\lim_{n\to\infty}\|\sigma_n - \sigma\|_1 = 0$, then $S(\rho, \sigma) \leq \liminf_{n\to\infty} S(\rho_n, \sigma_n)$. Moreover, if there exists a positive number $\lambda$ satisfying $\rho_n \leq \lambda\sigma_n$, then $\lim_{n\to\infty} S(\rho_n, \sigma_n) = S(\rho, \sigma)$.
(5) 
Monotonicity: For a channel $\Lambda^*$ from $\mathcal{S}$ to $\bar{\mathcal{S}}$, $S(\Lambda^*\rho, \Lambda^*\sigma) \leq S(\rho, \sigma)$.
(6) 
Lower bound: $\|\rho - \sigma\|_1^2/2 \leq S(\rho, \sigma)$.
(7) 
Invariance under unitary mappings: $S(U\rho U^*, U\sigma U^*) = S(\rho, \sigma)$, where $U$ is a unitary operator.
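The Umegaki relative entropy, together with properties (1) and (6), can be checked numerically for faithful (full-rank) states; a sketch in Python with NumPy, on invented random states:

```python
import numpy as np

rng = np.random.default_rng(2)

def random_density(n):
    g = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    rho = g @ g.conj().T
    return rho / np.trace(rho).real

def logm_h(rho):
    mu, U = np.linalg.eigh(rho)
    return (U * np.log(mu)) @ U.conj().T

def rel_entropy(rho, sigma):
    """Umegaki relative entropy tr rho (log rho - log sigma), faithful states."""
    return float(np.trace(rho @ (logm_h(rho) - logm_h(sigma))).real)

rho, sigma = random_density(3), random_density(3)

# (1) Positivity, with S(rho, rho) = 0
assert rel_entropy(rho, sigma) >= 0
assert abs(rel_entropy(rho, rho)) < 1e-10

# (6) Lower bound: ||rho - sigma||_1^2 / 2 <= S(rho, sigma)
trace_norm = float(np.sum(np.abs(np.linalg.eigvalsh(rho - sigma))))
assert trace_norm**2 / 2 <= rel_entropy(rho, sigma) + 1e-10
print("positivity and the lower bound hold for this pair of states")
```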
Let us extend the relative entropy to two positive operators instead of two states. If $A$ and $B$ are two positive Hermitian operators (not necessarily states, i.e., not necessarily of unit trace), then we set
$$S(A, B) = \mathrm{tr}\,A(\log A - \log B)$$
The following Bogoliubov inequality holds [41].
Theorem 4 
One has
$$S(A, B) \geq \mathrm{tr}\,A\,(\log \mathrm{tr}\,A - \log \mathrm{tr}\,B)$$
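The Bogoliubov inequality is easy to verify numerically for positive operators; a sketch in Python with NumPy (the positive operators below are invented test data):

```python
import numpy as np

rng = np.random.default_rng(3)

def logm_h(M):
    mu, U = np.linalg.eigh(M)
    return (U * np.log(mu)) @ U.conj().T

def s_positive(A, B):
    """S(A, B) = tr A (log A - log B) for positive, not necessarily unit-trace, operators."""
    return float(np.trace(A @ (logm_h(A) - logm_h(B))).real)

g1 = rng.normal(size=(3, 3)); A = g1 @ g1.T + 0.1 * np.eye(3)   # invented positive operators
g2 = rng.normal(size=(3, 3)); B = g2 @ g2.T + 0.1 * np.eye(3)

lhs = s_positive(A, B)
rhs = float(np.trace(A)) * (np.log(np.trace(A)) - np.log(np.trace(B)))
assert lhs >= rhs - 1e-9    # the Bogoliubov inequality
print(lhs, ">=", rhs)
```

The inequality follows from positivity of the relative entropy of the normalized operators $A/\mathrm{tr}A$ and $B/\mathrm{tr}B$.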
This inequality gives us the upper bound of the channel capacity [41].

6. Channel and Lifting

The concept of the channel plays an important role in the mathematical description of quantum communication. The attenuation channel is one of the most important models discussed in quantum optical communication [13]. Moreover, there exists a special channel named “lifting”, which is useful for characterizing quantum communication or stochastic processes. Here we briefly review the definition and fundamental properties of quantum channels and liftings [17,28,29,60].

6.1. Quantum Channels

A general quantum system, containing all systems, discrete and continuous, both classical and quantum, is described by a C*-algebra or a von Neumann algebra, so we discuss the channeling transformation in C*-algebraic contexts. However, for readers who are not familiar with C*-algebras, it is enough to imagine the usual quantum system; for instance, regard $\mathcal{A}$ and $S(\mathcal{A})$ below as $B(\mathcal{H})$ and $S(\mathcal{H})$, respectively. Let $\mathcal{A}$ and $\bar{\mathcal{A}}$ be C*-algebras and $S(\mathcal{A})$ and $S(\bar{\mathcal{A}})$ be the sets of all states on $\mathcal{A}$ and $\bar{\mathcal{A}}$.
A channel is a mapping from S ( A ) to S ( A ¯ ) . There exist channels with various properties.
Definition 5 
Let $(\mathcal{A}, S(\mathcal{A}))$ be an input system and $(\bar{\mathcal{A}}, S(\bar{\mathcal{A}}))$ be an output system. Take any $\varphi, \psi \in S(\mathcal{A})$.
(1)
$\Lambda^*$ is linear if $\Lambda^*(\lambda\varphi + (1-\lambda)\psi) = \lambda\Lambda^*\varphi + (1-\lambda)\Lambda^*\psi$ holds for any $\lambda \in [0,1]$.
(2)
$\Lambda^*$ is completely positive (CP) if $\Lambda^*$ is linear and its dual $\Lambda: \bar{\mathcal{A}} \to \mathcal{A}$ satisfies
$$\sum_{i,j=1}^{n} A_i^*\, \Lambda(\bar{A}_i^* \bar{A}_j)\, A_j \geq 0$$
for any $n \in \mathbb{N}$ and any $\{\bar{A}_i\} \subset \bar{\mathcal{A}}$, $\{A_i\} \subset \mathcal{A}$.
(3)
$\Lambda^*$ is of Schwarz type if $\Lambda(\bar{A}^*) = \Lambda(\bar{A})^*$ and $\Lambda(\bar{A})^*\Lambda(\bar{A}) \leq \Lambda(\bar{A}^*\bar{A})$.
(4)
$\Lambda^*$ is stationary if $\Lambda \circ \alpha_t = \bar{\alpha}_t \circ \Lambda$ for any $t \in \mathbb{R}$.
(Here $\alpha_t$ and $\bar{\alpha}_t$ are groups of automorphisms of the algebras $\mathcal{A}$ and $\bar{\mathcal{A}}$, respectively.)
(5)
$\Lambda^*$ is ergodic if $\Lambda^*$ is stationary and $\Lambda^*(\mathrm{ex}\,I(\alpha)) \subset \mathrm{ex}\,I(\bar{\alpha})$.
(Here $\mathrm{ex}\,I(\alpha)$ is the set of extreme points of the set $I(\alpha)$ of all stationary states.)
(6)
$\Lambda^*$ is orthogonal if any two orthogonal states $\varphi_1, \varphi_2 \in S(\mathcal{A})$ (denoted by $\varphi_1 \perp \varphi_2$) satisfy $\Lambda^*\varphi_1 \perp \Lambda^*\varphi_2$.
(7)
$\Lambda^*$ is deterministic if $\Lambda^*$ is orthogonal and bijective.
(8)
For a subset $\mathcal{S}_0$ of $S(\mathcal{A})$, $\Lambda^*$ is chaotic for $\mathcal{S}_0$ if $\Lambda^*\varphi_1 = \Lambda^*\varphi_2$ for any $\varphi_1, \varphi_2 \in \mathcal{S}_0$.
(9)
$\Lambda^*$ is chaotic if $\Lambda^*$ is chaotic for $S(\mathcal{A})$.
(10)
Stinespring-Sudarshan-Kraus representation: a completely positive channel $\Lambda^*$ can be represented as
$$\Lambda^*\rho = \sum_i A_i \rho A_i^*, \quad \sum_i A_i^* A_i \leq I$$
Here the $A_i$ are bounded operators on $\mathcal{H}$.
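The Kraus form can be sketched numerically. In the trace-preserving case the condition tightens to $\sum_i A_i^* A_i = I$; the sketch below (Python with NumPy, invented random Kraus operators normalized to satisfy it) checks that the resulting $\Lambda^*$ maps states to states:

```python
import numpy as np

rng = np.random.default_rng(4)

# Invented Kraus operators: start from arbitrary B_i and normalize so that
# sum_i A_i* A_i = I (trace-preserving case of the representation above).
B = [rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2)) for _ in range(3)]
S = sum(b.conj().T @ b for b in B)
mu, U = np.linalg.eigh(S)
S_inv_sqrt = (U * (1.0 / np.sqrt(mu))) @ U.conj().T
A = [b @ S_inv_sqrt for b in B]

def channel(rho):
    """Lambda* rho = sum_i A_i rho A_i* (Stinespring-Sudarshan-Kraus form)."""
    return sum(a @ rho @ a.conj().T for a in A)

rho = np.array([[0.7, 0.2], [0.2, 0.3]], dtype=complex)  # an invented input state
out = channel(rho)
assert abs(np.trace(out).real - 1.0) < 1e-10     # trace is preserved
assert np.all(np.linalg.eigvalsh(out) > -1e-10)  # output remains positive
print("the Kraus channel maps the state to a state")
```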
Most channels appearing in physical processes are CP channels. Examples of such channels are the following. Take a density operator $\rho \in S(\mathcal{H})$ as an input state.
(1) Unitary evolution: Let $H$ be the Hamiltonian of the system. Then
$$\rho \mapsto \Lambda^*\rho = U_t\, \rho\, U_t^*$$
where $t \in \mathbb{R}$, $U_t = \exp(-itH)$.
(2) Semigroup evolution: Let $V_t$ ($t \in \mathbb{R}_+$) be a one-parameter semigroup on $\mathcal{H}$. Then
$$\rho \mapsto \Lambda^*\rho = V_t\, \rho\, V_t^*, \quad t \in \mathbb{R}_+$$
(3) Quantum measurement: If a measuring apparatus is described by a positive operator-valued measure $\{Q_n\}$, then the state $\rho$ changes to the state $\Lambda^*\rho$ after the measurement:
$$\rho \mapsto \Lambda^*\rho = \sum_n Q_n\, \rho\, Q_n$$
(4) Reduction: If a system $\Sigma_1$ interacts with an external system $\Sigma_2$ described by another Hilbert space $\mathcal{K}$, and the initial states of $\Sigma_1$ and $\Sigma_2$ are $\rho$ and $\sigma$, respectively, then the combined state $\theta_t$ of $\Sigma_1$ and $\Sigma_2$ at time $t$ after the interaction between the two systems is given by
$$\theta_t \equiv U_t(\rho \otimes \sigma)U_t^*$$
where $U_t = \exp(-itH)$ with the total Hamiltonian $H$ of $\Sigma_1$ and $\Sigma_2$. A channel is obtained by taking the partial trace w.r.t. $\mathcal{K}$:
$$\rho \mapsto \Lambda^*\rho \equiv \mathrm{tr}_{\mathcal{K}}\,\theta_t$$
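Examples (1), (3) and (4) can be illustrated in a few lines (Python with NumPy; the Hamiltonians and states are invented toy data): unitary evolution leaves the entropy invariant, a projective measurement cannot decrease it, and reduction yields a legitimate state:

```python
import numpy as np

rng = np.random.default_rng(5)

def entropy(rho):
    mu = np.linalg.eigvalsh(rho)
    mu = mu[mu > 1e-12]
    return float(-np.sum(mu * np.log(mu)))

rho = np.diag([0.6, 0.4]).astype(complex)   # an invented input state

# (1) Unitary evolution rho -> U_t rho U_t*, U_t = exp(-itH): entropy is invariant.
Hm = np.array([[1.0, 0.5], [0.5, -1.0]], dtype=complex)   # a toy Hamiltonian
lam, V = np.linalg.eigh(Hm)
U = (V * np.exp(-1j * 0.7 * lam)) @ V.conj().T            # U_t for t = 0.7
assert abs(entropy(U @ rho @ U.conj().T) - entropy(rho)) < 1e-10

# (3) Projective measurement rho -> sum_n Q_n rho Q_n: entropy cannot decrease.
Q = [np.diag([1.0, 0.0]).astype(complex), np.diag([0.0, 1.0]).astype(complex)]
rho_pure = np.full((2, 2), 0.5, dtype=complex)            # pure state, S = 0
rho_meas = sum(q @ rho_pure @ q for q in Q)
assert entropy(rho_meas) >= entropy(rho_pure)

# (4) Reduction: couple to an environment sigma, evolve unitarily, trace out K.
sigma = np.diag([1.0, 0.0]).astype(complex)
Hm4 = rng.normal(size=(4, 4)); Hm4 = (Hm4 + Hm4.T) / 2    # toy total Hamiltonian
lam4, V4 = np.linalg.eigh(Hm4)
U4 = (V4 * np.exp(-1j * 0.7 * lam4)) @ V4.conj().T
theta_t = U4 @ np.kron(rho, sigma) @ U4.conj().T
rho_out = np.trace(theta_t.reshape(2, 2, 2, 2), axis1=1, axis2=3)  # tr_K
assert abs(np.trace(rho_out).real - 1.0) < 1e-10
print("unitary invariance, measurement entropy increase and reduction verified")
```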
(5) Optical communication processes: A quantum communication process is described by the following scheme [13]: an input state $\rho \in S(\mathcal{H}_1)$ is coupled to a noise state $\nu \in S(\mathcal{K}_1)$, part of the signal is lost into $S(\mathcal{K}_2)$, and the output is $\bar{\rho} = \Lambda^*\rho \in S(\mathcal{H}_2)$:
$$\begin{array}{ccc} S(\mathcal{H}_1) & \xrightarrow{\ \Lambda^*\ } & S(\mathcal{H}_2) \\ \gamma^* \downarrow & & \uparrow a^* \\ S(\mathcal{H}_1 \otimes \mathcal{K}_1) & \xrightarrow{\ \pi^*\ } & S(\mathcal{H}_2 \otimes \mathcal{K}_2) \end{array}$$
The maps $\gamma^*$, $a^*$ are given by
$$\gamma^*\rho = \rho \otimes \nu \ \ (\rho \in S(\mathcal{H}_1)), \qquad a^*\theta = \mathrm{tr}_{\mathcal{K}_2}\,\theta \ \ (\theta \in S(\mathcal{H}_2 \otimes \mathcal{K}_2))$$
where $\nu$ is a noise coming from the outside of the system. The map $\pi^*$ is a certain channel determined by the physical properties of the device transmitting information. Hence the channel for the above process is given by $\Lambda^*\rho \equiv \mathrm{tr}_{\mathcal{K}_2}\,\pi^*(\rho \otimes \nu) = (a^* \circ \pi^* \circ \gamma^*)(\rho)$.
(6) Attenuation process: Based on the construction of the optical communication process in (5), the attenuation channel is defined as follows [13]: Take $\nu_0 = |0\rangle\langle 0|$ (the vacuum state) and $\pi_0^*(\cdot) \equiv V_0 (\cdot) V_0^*$ given by
$$V_0(|n\rangle \otimes |0\rangle) = \sum_{j=0}^{n} C_j^n\, |j\rangle \otimes |n-j\rangle, \quad C_j^n = \sqrt{\frac{n!}{j!\,(n-j)!}}\,\alpha^j \beta^{n-j}, \quad |\alpha|^2 + |\beta|^2 = 1$$
Then the output state of the attenuation channel $\Lambda_0^*$ is obtained by
$$\Lambda_0^*\rho \equiv \mathrm{tr}_{\mathcal{K}_2}\,\pi_0^*(\rho \otimes \nu_0) = \mathrm{tr}_{\mathcal{K}_2}\,\pi_0^*(\rho \otimes |0\rangle\langle 0|)$$
$\eta = |\alpha|^2$ ($0 \leq \eta \leq 1$) is called the transmission rate of the attenuation channel $\Lambda_0^*$. In particular, for a coherent input state $\rho = |\theta\rangle\langle\theta|$, one has
$$\pi_0^*(|\theta\rangle\langle\theta| \otimes |0\rangle\langle 0|) = |\alpha\theta\rangle\langle\alpha\theta| \otimes |\beta\theta\rangle\langle\beta\theta|$$
which is called a beam splitting operator.
(7) Noisy optical channel: Based on (5), the noisy optical channel is defined as follows [28]: Take a noise state $\nu = |m_1\rangle\langle m_1|$, the $m_1$ photon number state of $\mathcal{K}_1$, and a linear mapping $V: \mathcal{H}_1 \otimes \mathcal{K}_1 \to \mathcal{H}_2 \otimes \mathcal{K}_2$ given by
$$V(|n_1\rangle \otimes |m_1\rangle) = \sum_{j=0}^{n_1+m_1} C_j^{n_1,m_1}\, |j\rangle \otimes |n_1+m_1-j\rangle$$
with
$$C_j^{n_1,m_1} = \sum_{r=L}^{K} (-1)^{n_1+j-r}\, \frac{\sqrt{n_1!\, m_1!\, j!\, (n_1+m_1-j)!}}{r!\,(n_1-r)!\,(j-r)!\,(m_1-j+r)!}\, \alpha^{m_1-j+2r}\, \beta^{n_1+j-2r}$$
where
$$K = \min\{n_1, j\}, \quad L = \max\{j - m_1, 0\}$$
Then the output state of the noisy optical channel $\Lambda^*$ is defined by
$$\Lambda^*\rho \equiv \mathrm{tr}_{\mathcal{K}_2}\,\pi^*(\rho \otimes \nu) = \mathrm{tr}_{\mathcal{K}_2}\,V(\rho \otimes |m_1\rangle\langle m_1|)V^*$$
for the input state $\rho \in S(\mathcal{H}_1)$. In particular, for a coherent input state $\rho = |\theta\rangle\langle\theta|$ and a coherent noise state $\nu = |\gamma\rangle\langle\gamma|$, we obtain
$$\pi^*(|\theta\rangle\langle\theta| \otimes |\gamma\rangle\langle\gamma|) = |\alpha\theta + \beta\gamma\rangle\langle\alpha\theta + \beta\gamma| \otimes |\beta\theta + \alpha\gamma\rangle\langle\beta\theta + \alpha\gamma|$$
which is called a generalized beam splitting operator.

6.2. Liftings

There exists a special channel named “lifting”; it is a useful concept for characterizing quantum communication or stochastic processes. It can also serve as a mathematical tool to describe a process in a quantum algorithm, so we explain its foundations here.
Definition 6 
Let $\mathcal{A}_1, \mathcal{A}_2$ be C*-algebras and let $\mathcal{A}_1 \otimes \mathcal{A}_2$ be a fixed C*-tensor product of $\mathcal{A}_1$ and $\mathcal{A}_2$. A lifting from $\mathcal{A}_1$ to $\mathcal{A}_1 \otimes \mathcal{A}_2$ is a weak *-continuous map
$$\mathcal{E}^*: S(\mathcal{A}_1) \to S(\mathcal{A}_1 \otimes \mathcal{A}_2)$$
If $\mathcal{E}^*$ is affine and its dual is a completely positive map, we call it a linear lifting; if it maps pure states into pure states, we call it pure.
The algebra $\mathcal{A}_2$ can be that of the output, namely $\bar{\mathcal{A}}$ above. Note that to every lifting from $\mathcal{A}_1$ to $\mathcal{A}_1 \otimes \mathcal{A}_2$ we can associate two channels: one from $\mathcal{A}_1$ to $\mathcal{A}_1$, defined by
$$\Lambda^*\varphi_1(a_1) \equiv (\mathcal{E}^*\varphi_1)(a_1 \otimes 1), \quad a_1 \in \mathcal{A}_1$$
and another from $\mathcal{A}_1$ to $\mathcal{A}_2$, defined by
$$\Lambda^*\varphi_1(a_2) \equiv (\mathcal{E}^*\varphi_1)(1 \otimes a_2), \quad a_2 \in \mathcal{A}_2$$
In general, a state $\varphi \in S(\mathcal{A}_1 \otimes \mathcal{A}_2)$ such that
$$\varphi|_{\mathcal{A}_1 \otimes 1} = \varphi_1, \quad \varphi|_{1 \otimes \mathcal{A}_2} = \varphi_2$$
is called a compound state of the states $\varphi_1 \in S(\mathcal{A}_1)$ and $\varphi_2 \in S(\mathcal{A}_2)$. In classical probability theory, the term coupling between $\varphi_1$ and $\varphi_2$ is also used.
The following problem is important in several applications: given a state $\varphi_1 \in S(\mathcal{A}_1)$ and a channel $\Lambda^*: S(\mathcal{A}_1) \to S(\mathcal{A}_2)$, find a standard lifting $\mathcal{E}^*: S(\mathcal{A}_1) \to S(\mathcal{A}_1 \otimes \mathcal{A}_2)$ such that $\mathcal{E}^*\varphi_1$ is a compound state of $\varphi_1$ and $\Lambda^*\varphi_1$. Several particular solutions of this problem have been proposed by Ohya [13,14] and by Cecchini and Petz [63]; however, an explicit description of all possible solutions to this problem is still missing.
Definition 7 
A lifting from $\mathcal{A}_1$ to $\mathcal{A}_1 \otimes \mathcal{A}_2$ is called nondemolition for a state $\varphi_1 \in S(\mathcal{A}_1)$ if $\varphi_1$ is invariant for $\Lambda^*$, i.e., if for all $a_1 \in \mathcal{A}_1$
$$(\mathcal{E}^*\varphi_1)(a_1 \otimes 1) = \varphi_1(a_1)$$
The idea of this definition is that the interaction with system 2 does not alter the state of system 1.
Definition 8 
Let $\mathcal{A}_1, \mathcal{A}_2$ be C*-algebras and let $\mathcal{A}_1 \otimes \mathcal{A}_2$ be a fixed C*-tensor product of $\mathcal{A}_1$ and $\mathcal{A}_2$. A transition expectation from $\mathcal{A}_1 \otimes \mathcal{A}_2$ to $\mathcal{A}_1$ is a completely positive linear map $\mathcal{E}: \mathcal{A}_1 \otimes \mathcal{A}_2 \to \mathcal{A}_1$ satisfying
$$\mathcal{E}(1_{\mathcal{A}_1} \otimes 1_{\mathcal{A}_2}) = 1_{\mathcal{A}_1}$$
An input signal is transmitted and received by an apparatus which produces an output signal. Here $\mathcal{A}_1$ (resp. $\mathcal{A}_2$) is interpreted as the algebra of observables of the input (resp. output) signal, and $\mathcal{E}$ describes the interaction between the input signal and the receiver as well as the preparation of the receiver. If $\varphi_1 \in S(\mathcal{A}_1)$ is the input signal, then the state $\Lambda^*\varphi_1 \in S(\mathcal{A}_2)$ is the state of the (observed) output signal. Therefore, in the reduction dynamics discussed before, the correspondence from a state $\rho$ to the interacting state $\theta_t \equiv U_t(\rho \otimes \sigma)U_t^*$ gives us a time-dependent lifting.
Another important lifting related to this signal transmission is the one arising from the quantum communication process discussed above. In several important applications, the state $\varphi_1$ of the system before the interaction (preparation, input signal) is not known, and one would like to know this state knowing only $\Lambda^*\varphi_1 \in S(\mathcal{A}_2)$, i.e., the state of the apparatus after the interaction (output signal). From a mathematical point of view this problem is not well posed, since the map $\Lambda^*$ is usually not invertible. The best one can do in such cases is to gain control on the description of those input states which have the same image under $\Lambda^*$, and then choose among them according to some statistical criterion.
In the following we rewrite some communication processes by using liftings.
Example 9 (1) : Isometric lifting. 
Let $V: \mathcal{H}_1 \to \mathcal{H}_1 \otimes \mathcal{H}_2$ be an isometry,
$$V^*V = 1_{\mathcal{H}_1}$$
Then the map
$$\mathcal{E}: x \in B(\mathcal{H}_1) \otimes B(\mathcal{H}_2) \mapsto V^* x V \in B(\mathcal{H}_1)$$
is a transition expectation in the sense of Accardi, and the associated lifting maps a density matrix $w_1$ on $\mathcal{H}_1$ into $\mathcal{E}^* w_1 = V w_1 V^*$ on $\mathcal{H}_1 \otimes \mathcal{H}_2$. Liftings of this type are called isometric. Every isometric lifting is a pure lifting; such liftings are applied in some quantum algorithms such as Shor’s.
Such maps extend linearly to isometries, and the corresponding isometric liftings are neither of convex product type nor of nondemolition type.
Example 10 (2) : The compound lifting. 
Let $\Lambda^*: S(\mathcal{A}_1) \to S(\mathcal{A}_2)$ be a channel. For any $\varphi_1 \in S(\mathcal{A}_1)$ in the closed convex hull of the extremal states, fix a decomposition of $\varphi_1$ as a convex combination of extremal states in $S(\mathcal{A}_1)$,
$$\varphi_1 = \int_{S(\mathcal{A}_1)} \omega\, d\mu$$
where $\mu$ is a Borel measure on $S(\mathcal{A}_1)$ with support in the extremal states, and define
$$\mathcal{E}^*\varphi_1 \equiv \int_{S(\mathcal{A}_1)} \omega \otimes \Lambda^*\omega\, d\mu$$
Then $\mathcal{E}^*: S(\mathcal{A}_1) \to S(\mathcal{A}_1 \otimes \mathcal{A}_2)$ is a lifting, nonlinear even if $\Lambda^*$ is linear, and of nondemolition type. The most general lifting mapping $S(\mathcal{A}_1)$ into the closed convex hull of the extremal product states on $\mathcal{A}_1 \otimes \mathcal{A}_2$ is essentially of this type. This nonlinear nondemolition lifting was first discussed by Ohya to define the compound state and the mutual entropy, as explained before. However, the above is a bit more general, because we weaken the condition that $\mu$ be concentrated on the extremal states.
Therefore, once a channel is given, a lifting of convex product type can be constructed from it. For example, the von Neumann quantum measurement process is written, in the terminology of liftings, as follows: having measured a compact observable $A = \sum_n a_n P_n$ (spectral decomposition with $\sum_n P_n = I$) in a state $\rho$, the state after the measurement is
$$\Lambda^*\rho = \sum_n P_n\, \rho\, P_n$$
and a lifting $\mathcal{E}^*$, of convex product type, associated to this channel $\Lambda^*$ and to a fixed decomposition of $\rho$ as $\rho = \sum_n \mu_n \rho_n$ ($\rho_n \in S(\mathcal{A}_1)$), is given by
$$\mathcal{E}^*\rho = \sum_n \mu_n\, \rho_n \otimes \Lambda^*\rho_n$$
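A small sketch of this convex-product lifting for the measurement channel (Python with NumPy; the decomposition $\rho = \sum_n \mu_n \rho_n$ into pure states is invented): the compound state $\mathcal{E}^*\rho$ has marginals $\rho$ and $\Lambda^*\rho$:

```python
import numpy as np

P = [np.diag([1.0, 0.0]).astype(complex), np.diag([0.0, 1.0]).astype(complex)]

def channel(rho):
    """von Neumann measurement channel: rho -> sum_n P_n rho P_n."""
    return sum(p @ rho @ p for p in P)

# An invented decomposition rho = sum_n mu_n rho_n into pure states |+><+|, |-><-|
mu = [0.5, 0.5]
rho_n = [np.array([[0.5, 0.5], [0.5, 0.5]], dtype=complex),
         np.array([[0.5, -0.5], [-0.5, 0.5]], dtype=complex)]
rho = sum(m * r for m, r in zip(mu, rho_n))

# Lifting of convex product type: E* rho = sum_n mu_n rho_n (x) Lambda* rho_n
E = sum(m * np.kron(r, channel(r)) for m, r in zip(mu, rho_n))

# Its two marginals recover the input state and the channel output:
m1 = np.trace(E.reshape(2, 2, 2, 2), axis1=1, axis2=3)   # trace over the 2nd factor
m2 = np.trace(E.reshape(2, 2, 2, 2), axis1=0, axis2=2)   # trace over the 1st factor
assert np.allclose(m1, rho)            # nondemolition: marginal 1 is rho
assert np.allclose(m2, channel(rho))   # marginal 2 is Lambda* rho
print("the compound state has marginals rho and Lambda* rho")
```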
Before closing this section, we reconsider the noisy channel, the attenuation channel and the amplifier process (lifting) in optical communication.
Example 11 (3) : 
The attenuation (or beam splitting) lifting. 
It is the particular isometric lifting characterized by the following properties:
$$\mathcal{H}_1 = \mathcal{H}_2 =: \Gamma(\mathbb{C}) \ (\text{Fock space over } \mathbb{C}) = L^2(\mathbb{R})$$
$$V: \Gamma(\mathbb{C}) \to \Gamma(\mathbb{C}) \otimes \Gamma(\mathbb{C})$$
is characterized by the expression
$$V|\theta\rangle = |\alpha\theta\rangle \otimes |\beta\theta\rangle$$
where $|\theta\rangle$ is the normalized coherent vector parametrized by $\theta \in \mathbb{C}$ and $\alpha, \beta \in \mathbb{C}$ are such that
$$|\alpha|^2 + |\beta|^2 = 1$$
Notice that this lifting maps coherent states into products of coherent states, so it maps the simplex of the so-called classical states (i.e., the convex combinations of coherent vectors) into itself. Restricted to these states it is of the convex product type discussed above, but it is not of convex product type on the set of all states.
Denoting by $\omega_\theta$, for $\theta \in \mathbb{C}$, the coherent state on $B(\Gamma(\mathbb{C}))$, namely,
$$\omega_\theta(b) = \langle\theta, b\,\theta\rangle, \quad b \in B(\Gamma(\mathbb{C}))$$
then for any $b \in B(\Gamma(\mathbb{C}))$
$$(\mathcal{E}^*\omega_\theta)(b \otimes 1) = \omega_{\alpha\theta}(b)$$
so that this lifting is not nondemolition. These equations mean that, by the effect of the interaction, a coherent signal (beam) $\theta$ splits into two signals (beams) that are still coherent but of lower intensity, while the total intensity (energy) is preserved by the transformation.
Finally, we mention two important beam splittings, which are used to discuss quantum gates and quantum teleportation [64,65].
(1) Superposed beam splitting:
$$V_s|\theta\rangle \equiv \frac{1}{\sqrt{2}}\left(|\alpha\theta\rangle \otimes |\beta\theta\rangle - i\,|\beta\theta\rangle \otimes |\alpha\theta\rangle\right) = V_0\left(\frac{1}{\sqrt{2}}\left(|\theta\rangle \otimes |0\rangle - i\,|0\rangle \otimes |\theta\rangle\right)\right)$$
(2) Beam splitting with two inputs and two outputs: Let $|\theta\rangle$ and $|\gamma\rangle$ be two input coherent vectors. Then
$$V_d(|\theta\rangle \otimes |\gamma\rangle) \equiv |\alpha\theta + \beta\gamma\rangle \otimes |\beta\theta + \alpha\gamma\rangle = V(|\theta\rangle \otimes |\gamma\rangle)$$
Example 12 (4) Amplifier channel: 
To recover the loss, we need to amplify the signal (photon). In quantum optics, a linear amplifier is usually expressed by means of annihilation operators $a$ and $b$ on $\mathcal{H}$ and $\mathcal{K}$, respectively:
$$c = \sqrt{G}\, a \otimes I + \sqrt{G-1}\, I \otimes b^*$$
where $G$ ($\geq 1$) is a constant and $c$ satisfies the CCR (i.e., $[c, c^*] = I$) on $\mathcal{H} \otimes \mathcal{K}$. This expression is not convenient for computing quantities such as entropy. The lifting expression of the amplifier is suited to such use, and it is given as follows:
Let $c = \mu\, a \otimes I + \nu\, I \otimes b^*$ with $|\mu|^2 - |\nu|^2 = 1$, and let $|\gamma\rangle$ be the eigenvector of $c$: $c|\gamma\rangle = \gamma|\gamma\rangle$. For two coherent vectors $|\theta\rangle$ on $\mathcal{H}$ and $|\theta'\rangle$ on $\mathcal{K}$, $|\gamma\rangle$ can be written in the squeezing expression $|\gamma\rangle = |\theta \otimes \theta'; \mu, \nu\rangle$, and the lifting is defined by the isometry
$$V_{\theta'}|\theta\rangle = |\theta \otimes \theta'; \mu, \nu\rangle$$
such that
$$\mathcal{E}^*\rho = V_{\theta'}\, \rho\, V_{\theta'}^*, \quad \rho \in S(\mathcal{H})$$
The channel of the amplifier is
$$\Lambda^*\rho = \mathrm{tr}_{\mathcal{K}}\, \mathcal{E}^*\rho$$

7. Quantum Mutual Entropy

Quantum relative entropy was introduced by Umegaki and generalized by Araki and Uhlmann. A quantum analogue of Shannon’s mutual entropy was then considered by Levitin, Holevo and Ingarden for classical inputs and outputs passing through a possibly quantum channel, in which case, as discussed below, the Shannon theory essentially applies. Thus we call such quantum mutual entropy semi-quantum mutual entropy in the sequel. The fully quantum mutual entropy, namely for quantum input and quantum output with a quantum channel, was introduced by Ohya; it is called the quantum mutual entropy. It can be generalized to a general quantum system described by a C*-algebra.
The quantum mutual entropy clearly contains the semi-quantum mutual entropy, as shown below. We mainly discuss the quantum mutual entropy in the usual quantum system described by a Hilbert space; its generalization to C*-systems will be explained briefly for future use (e.g., relativistic quantum information) later in this paper. Note that the general mutual entropy contains all other cases, including the measure-theoretic definition of Gelfand and Yaglom.
Let H be a Hilbert space describing the input system, and let the output system be described by another Hilbert space H̄; often one takes H̄ = H. A channel from the input system to the output system is a mapping Λ* from S(H) to S(H̄).
An input state ρ ∈ S(H) is sent to the output system through the channel Λ*, so that the output state is Λ*ρ. It is then important to investigate how much information in ρ is correctly transmitted to the output state Λ*ρ. This amount of information transmitted from input to output is expressed by the mutual entropy (or mutual information).
The quantum mutual entropy was introduced on the basis of von Neumann entropy for purely quantum communication processes. The mutual entropy depends on an input state ρ and a channel Λ * , so it is denoted by I ρ ; Λ * , which should satisfy the following conditions:
(1) The quantum mutual entropy is well-matched to the von Neumann entropy; that is, if the channel is trivial, i.e., Λ* = identity map, then the mutual entropy equals the von Neumann entropy: I(ρ; id) = S(ρ).
(2) When the system is classical, the quantum mutual entropy reduces to classical one.
(3) The Shannon-type fundamental inequality 0 ≤ I(ρ; Λ*) ≤ S(ρ) holds.
In order to define the quantum mutual entropy, we need the quantum relative entropy and the joint state (it is called ”compound state” in the sequel) describing the correlation between an input state ρ and the output state Λ * ρ through a channel Λ * . A finite partition of Ω in classical case corresponds to an orthogonal decomposition E k of the identity operator I of H in quantum case because the set of all orthogonal projections is considered to make an event system in a quantum system. It is known that the following equality holds
\sup\Big\{ -\sum_k \mathrm{tr}(\rho E_k)\,\log\,\mathrm{tr}(\rho E_k)\ ;\ \{E_k\} \Big\} = -\mathrm{tr}\,\rho \log \rho
and the supremum is attained when {E_k} is given by a Schatten decomposition \rho = \sum_k \mu_k E_k. Therefore the Schatten decomposition is used to define the compound state and the quantum mutual entropy.
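The role of the Schatten decomposition here can be checked numerically in a small finite-dimensional example. The following Python sketch is our illustration, not part of the original text; the state and the coarse partition are arbitrary choices. It compares the entropy of the probabilities tr(ρE_k) for the rank-one eigenprojection partition with that of a coarser orthogonal decomposition compatible with ρ:

```python
import numpy as np

def von_neumann_entropy(rho):
    """S(rho) = -tr(rho log rho), natural logarithm."""
    w = np.linalg.eigvalsh(rho)
    w = w[w > 1e-12]
    return float(-np.sum(w * np.log(w)))

def partition_entropy(rho, projections):
    """-sum_k tr(rho E_k) log tr(rho E_k) for an orthogonal decomposition {E_k} of I."""
    probs = np.array([np.trace(rho @ E).real for E in projections])
    probs = probs[probs > 1e-12]
    return float(-np.sum(probs * np.log(probs)))

rho = np.diag([0.5, 0.3, 0.2])                # nondegenerate spectrum

schatten = [np.diag(e) for e in np.eye(3)]    # rank-one eigenprojections E_k
coarse = [np.diag([1.0, 1.0, 0.0]),           # a coarser orthogonal decomposition
          np.diag([0.0, 0.0, 1.0])]

S = von_neumann_entropy(rho)
assert abs(partition_entropy(rho, schatten) - S) < 1e-10  # Schatten partition attains S(rho)
assert partition_entropy(rho, coarse) < S                 # a coarser partition gives less
```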
The compound state Φ E (corresponding to joint state in classical systems) of ρ and Λ * ρ was introduced by Ohya in 1983. It is given by
\Phi_E = \sum_k \mu_k\, E_k \otimes \Lambda^{*} E_k
where E stands for a Schatten decomposition E k of ρ so that the compound state depends on how we decompose the state ρ into basic states (elementary events), in other words, how to see the input state. It is easy to see that tr Φ E = 1 , Φ E > 0 .
Applying the relative entropy S(·, ·) to the two compound states Φ_E and Φ_0 ≡ ρ ⊗ Λ*ρ (the former includes a certain correlation of input and output and the latter does not), we can define Ohya's quantum mutual entropy (information) as
I(\rho; \Lambda^{*}) = \sup\big\{ S(\Phi_E, \Phi_0)\ ;\ E = \{E_k\} \big\}
where the supremum is taken over all Schatten decompositions of ρ, because such a decomposition is not unique unless every eigenvalue of ρ is nondegenerate. Some computation reduces it to the following form for a linear channel.
Theorem 13 
We have
I(\rho; \Lambda^{*}) = \sup\Big\{ \sum_k \mu_k S(\Lambda^{*}E_k, \Lambda^{*}\rho)\ ;\ E = \{E_k\} \Big\}
It is easy to see that the quantum mutual entropy satisfies all conditions (1)∼(3) mentioned above.
When the input system is classical, an input state ρ is given by a probability distribution or a probability measure, and in either case the Schatten decomposition of ρ is unique. For the case of a probability distribution ρ = {μ_k},
\rho = \sum_k \mu_k \delta_k
where δ k is the delta measure, that is,
\delta_k(j) = \delta_{k,j} = \begin{cases} 1 & (k = j) \\ 0 & (k \ne j) \end{cases} \qquad \forall j
Therefore for any channel Λ * , the mutual entropy becomes
I(\rho; \Lambda^{*}) = \sum_k \mu_k S(\Lambda^{*}\delta_k, \Lambda^{*}\rho)
which equals the following usual expression when one of the two terms is finite for an infinite-dimensional Hilbert space:
I(\rho; \Lambda^{*}) = S(\Lambda^{*}\rho) - \sum_k \mu_k S(\Lambda^{*}\delta_k)
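For a classical input, this expression is straightforward to evaluate. The following sketch is our illustration (not an example from the text); the codeword states σ_k = Λ*δ_k are hypothetical choices, taken here as two equiprobable non-orthogonal pure qubit states:

```python
import numpy as np

def S(rho):
    """von Neumann entropy, natural logarithm."""
    w = np.linalg.eigvalsh(rho)
    w = w[w > 1e-12]
    return float(-np.sum(w * np.log(w)))

# Classical input distribution {mu_k}; the channel sends delta_k to a
# (hypothetical) quantum codeword sigma_k = Lambda* delta_k.
mu = np.array([0.5, 0.5])
psi0 = np.array([1.0, 0.0])
theta = np.pi / 8
psi1 = np.array([np.cos(theta), np.sin(theta)])   # non-orthogonal to psi0
sigmas = [np.outer(v, v) for v in (psi0, psi1)]

out = sum(m * s for m, s in zip(mu, sigmas))      # Lambda* rho
I_semi = S(out) - sum(m * S(s) for m, s in zip(mu, sigmas))

assert 0.0 <= I_semi <= -np.sum(mu * np.log(mu)) + 1e-12  # bounded by the input Shannon entropy
```

Because the codewords overlap, I_semi is strictly smaller than the Shannon entropy log 2 of the input distribution.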
The above equality was taken by Levitin and Holevo (LH for short in the sequel) as the mutual entropy associated with a classical-quantum channel. Thus Ohya's quantum mutual entropy (called the quantum mutual entropy in the sequel) contains the LH quantum mutual entropy (called the semi-quantum mutual entropy in the sequel) as a special case.
Note that the definition of the quantum mutual entropy might be written as
I_F(\rho; \Lambda^{*}) = \sup\Big\{ \sum_k \mu_k S(\Lambda^{*}\rho_k, \Lambda^{*}\rho)\ ;\ \rho = \sum_k \mu_k \rho_k \in F(\rho) \Big\}
where F(ρ) is the set of all orthogonal finite decompositions of ρ. Here "ρ_k is orthogonal to ρ_j" (denoted ρ_k ⊥ ρ_j) means that the range of ρ_k is orthogonal to that of ρ_j. We briefly explain this equality in the next theorem.
Theorem 14 
One has I ρ ; Λ * = I F ρ ; Λ * .
Moreover the following fundamental inequality follows from the monotonicity of relative entropy :
Theorem 15 (Shannon’s inequality) 
0 \le I(\rho; \Lambda^{*}) \le \min\{ S(\rho),\ S(\Lambda^{*}\rho) \}
For given two channels Λ 1 * and Λ 2 * , one has the quantum data processing inequality. That is,
S(\rho) \ge I(\rho, \Lambda_1^{*}) \ge I(\rho, \Lambda_2^{*}\Lambda_1^{*})
The second inequality follows from monotonicity of the relative entropy.
This is analogous to the classical data processing inequality for a Markov process X → Y → Z:
S(X) \ge I(X, Y) \ge I(X, Z)
where I ( X , Y ) is the mutual information between random variables X and Y .
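The quantum data processing inequality can be checked numerically via the Theorem 13 expression. The sketch below is our illustration; the depolarizing channels and the input state are arbitrary choices, and a nondegenerate spectrum is assumed so that the Schatten decomposition is unique:

```python
import numpy as np

def logm_psd(rho):
    """Matrix logarithm of a positive semidefinite matrix (eigenvalues clipped)."""
    w, v = np.linalg.eigh(rho)
    return v @ np.diag(np.log(np.clip(w, 1e-15, None))) @ v.conj().T

def rel_ent(a, b):
    """Umegaki relative entropy S(a, b) = tr a (log a - log b)."""
    return float(np.trace(a @ (logm_psd(a) - logm_psd(b))).real)

def mutual_entropy(rho, channel):
    """I(rho; Lambda*) = sum_k mu_k S(Lambda* E_k, Lambda* rho),
    with E_k the rank-one eigenprojections of rho (Theorem 13 form)."""
    w, v = np.linalg.eigh(rho)
    out = channel(rho)
    return sum(mu * rel_ent(channel(np.outer(vec, vec.conj())), out)
               for mu, vec in zip(w, v.T))

def depolarize(rho, p):
    d = rho.shape[0]
    return (1 - p) * rho + p * np.trace(rho) * np.eye(d) / d

rho = np.diag([0.7, 0.3])
lam1 = lambda r: depolarize(r, 0.2)                    # Lambda_1*
lam21 = lambda r: depolarize(depolarize(r, 0.2), 0.5)  # Lambda_2* Lambda_1*

S_rho = -(0.7 * np.log(0.7) + 0.3 * np.log(0.3))
I1 = mutual_entropy(rho, lam1)
I21 = mutual_entropy(rho, lam21)
assert S_rho >= I1 - 1e-12   # S(rho) >= I(rho, Lambda_1*)
assert I1 >= I21 - 1e-12     # I(rho, Lambda_1*) >= I(rho, Lambda_2* Lambda_1*)
```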
The mutual entropy is a measure for not only information transmission but also description of state change, so that this quantity can be applied to several topics in quantum dynamics. It can be also applied to some topics in quantum computer or computation to see the ability of information transmission.

8. Some Applications to Statistical Physics

8.1. Ergodic theorem

We have an ergodic type theorem with respect to quantum mutual entropy.
Theorem 16 
Let a state φ be given by φ · = tr ρ · .
(1) 
If a channel Λ * is deterministic, then I ( ρ ; Λ * ) = S ( ρ ) .
(2) 
If a channel Λ * is chaotic, then I ( ρ ; Λ * ) = 0 .
(3) 
If ρ is a faithful state and every eigenvalue of ρ is nondegenerate, then I(ρ; Λ*) = S(Λ*ρ).
(Remark: Here ρ is said to be faithful if tr ρ A * A = 0 implies A = 0 )

8.2. CCR and channel

We discuss the attenuation channel in the context of the Weyl algebra.
Let T be a symplectic transformation from H to H ⊕ K, i.e., σ(f, g) = σ(Tf, Tg). Then there is a homomorphism α_T : CCR(H) → CCR(H ⊕ K) such that
\alpha_T(W(f)) = W(Tf)
We may regard the Weyl algebra CCR(H ⊕ K) as CCR(H) ⊗ CCR(K), and, given a state ψ on CCR(K), a channeling transformation arises as
(\Lambda\omega)(A) = (\omega \otimes \psi)(\alpha_T(A))
where the input state ω is an arbitrary state of CCR(H) and A ∈ CCR(H) (this ψ is the noise state mentioned above). To see a concrete example discussed in [13], we choose K = H, ψ = φ (the Fock state) and
F(\xi) = a\xi \oplus b\xi
If |a|^{2} + |b|^{2} = 1 holds for the numbers a and b, then F is an isometry and a symplectic transformation, and we arrive at the channeling transformation
(\Lambda\omega)(W(g)) = \omega(W(ag))\, e^{-\frac{1}{2}\|bg\|^{2}} \qquad (g \in H)
In order to have an alternative description of Λ in terms of density operators acting on Γ(H), we introduce the linear operator V : Γ(H) → Γ(H) ⊗ Γ(H) defined by
V(\pi_F(A))\Phi = \pi_F(\alpha_T(A))(\Phi \otimes \Phi)
so that we have
V(\pi_F(W(f)))\Phi = \big(\pi_F(W(af)) \otimes \pi_F(W(bf))\big)(\Phi \otimes \Phi)
and hence
V\Phi_f = \Phi_{af} \otimes \Phi_{bf}
Theorem 17 
Let ω be a state of CCR(H) which has a density D in the Fock representation. Then the output state Λ * ω of the attenuation channel has density tr 2 V D V * in the Fock representation.
This says that Λ* is really the same as the noisy channel with m = 0.
We note that Λ , the dual of Λ * , is a so-called quasifree completely positive mapping of C C R ( H ) given as
\Lambda(W(f)) = W(af)\, e^{-\frac{1}{2}\|bf\|^{2}}
Theorem 18 
If ψ is a regular state of CCR(H), that is, t ↦ ψ(W(tf)) is a continuous function on ℝ for every f ∈ H, then
(\Lambda^{*})^{n}(\psi) \to \varphi
pointwise, where φ is a Fock state.
It is worth noting that the singular state
\tau(W(f)) = \begin{cases} 0 & \text{if } f \ne 0 \\ 1 & \text{if } f = 0 \end{cases}
is an invariant state of CCR(H). On the other hand, the proposition applies to states possessing a density operator in the Fock representation. Therefore, we have
Corollary 19 
Λ * regarded as a channel of B ( Γ ( H ) ) has a unique invariant state, the Fock state, and correspondingly Λ * is ergodic.
Λ* is not only ergodic but also completely dissipative, in the sense that Λ(A*A) = Λ(A*)Λ(A) can happen only in the trivial case when A is a multiple of the identity; this was discussed by M. Fannes and A. Verbeure. In fact,
\Lambda = (\mathrm{id} \otimes \omega) \circ \alpha_T
where α_T is as above and \omega(W(f)) = \exp(-\tfrac{1}{2}\|f\|^{2}) is a quasi-free state.

8.3. Irreversible processes

Irreversible phenomena can be treated by several different methods. One of them is based on the entropy change. However, it is difficult to explain the entropy change from reversible equations of motion such as the Schrödinger and Liouville equations. Therefore we need some modification to explain the irreversibility of nature:
(i) QM + "α", where α represents an effect of noise, coarse graining, etc.
Here we discuss some trials along this line, essentially done in [16]. Let ρ be a state and Λ*, Λ_t* be channels. Then we ask:
(1) ρ → ρ̄ = Λ*ρ: does S(ρ) ≤ S(ρ̄) hold?
(2) ρ → ρ_t = Λ_t*ρ → ρ̄ (t → ∞): does lim_t S(ρ_t) = S(ρ̄) hold?
(3) Consider the change of I ( ρ ; Λ t * ) . ( I ( ρ ; Λ t * ) should be decreasing!)

8.4. Entropy change in linear response dynamics

We first discuss the entropy change in the linear response dynamics. Let H be a lower bounded Hamiltonian and take
U_t = \exp(itH), \qquad \alpha_t(A) = U_t^{*}\, A\, U_t
For a KMS state φ given by a density operator ρ and a perturbation λV (V = V* ∈ A, λ ∈ [0, 1]), the perturbed time evolution is defined by the Dyson series:
\alpha_t^{V}(A) = \sum_{n \ge 0} (i\lambda)^{n} \int_{0 \le t_1 \le \cdots \le t_n \le t} dt_1 \cdots dt_n\ \big[\alpha_{t_1}(V), \big[\cdots, \big[\alpha_{t_n}(V),\ \alpha_t(A)\big]\cdots\big]\big]
and the perturbed state is
\varphi^{V}(A) = \frac{\varphi(W^{*}AW)}{\varphi(W^{*}W)}
where
W = \sum_{n \ge 0} (-\lambda)^{n} \int_{0 \le t_1 \le \cdots \le t_n \le 1/2} dt_1 \cdots dt_n\ \alpha_{it_1}(V) \cdots \alpha_{it_n}(V)
The linear response time evolution and the linear response perturbed state are given by
\alpha_t^{V,1}(A) = \alpha_t(A) + i\lambda \int_0^t ds\, \big[\alpha_s(V),\ \alpha_t(A)\big]
\varphi^{V,1}(A) = \varphi(A) - \lambda \int_0^1 ds\, \varphi(A\,\alpha_{is}(V)) + \lambda\, \varphi(A)\varphi(V)
This linear response perturbed state φ V , 1 is written as
\varphi^{V,1}(A) = \mathrm{tr}\big(\rho^{V,1} A\big)
where
\rho^{V,1} = \Big( I - \lambda \int_0^{1} \alpha_{is}(V)\, ds + \lambda\, \mathrm{tr}(\rho V) \Big)\rho
The linear response time dependent state is
\rho^{V,1}(t) = \alpha_t^{V,1\,*}(\rho) = \Big( I - i\lambda \int_0^{t} \alpha_s(V)\, ds + i\lambda \int_0^{t} \alpha_{s+i}(V)\, ds \Big)\rho
Put
\theta_t \equiv \frac{\rho^{V,1}(t)}{\mathrm{tr}\,\rho^{V,1}(t)}, \qquad \theta \equiv \frac{\rho^{V,1}}{\mathrm{tr}\,\rho^{V,1}}
S(\rho^{V,1}(t)) \equiv S(\theta_t), \qquad S(\rho^{V,1}) \equiv S(\theta)
The change of the linear response entropy S ( ρ V , 1 ( t ) ) is shown in the following theorem [16].
Theorem 20 
If ρ V , 1 ( t ) goes to ρ V , 1 as t and S ( ρ ) < + , then S ( ρ V , 1 ( t ) ) S ( ρ V , 1 ) as t .
Remark: Even when \alpha_t^{V*}(\rho) \to \rho^{V}\ (t \to \infty), we always have
S(\alpha_t^{V*}(\rho)) = S(\rho) \ne S(\rho^{V})
Concerning the entropy change in exact dynamics, we have the following general result [16]:
Theorem 21 
Let Λ * : S H S K be a channel satisfying
t r Λ ρ = t r ρ for any ρ S ( H )
Then
S(\rho) \le S(\Lambda^{*}\rho)
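This entropy increase can be illustrated with a small unital channel. The sketch below is our illustration; the dephasing channel and the pure input state are arbitrary choices satisfying the hypotheses of the theorem:

```python
import numpy as np

def S(rho):
    """von Neumann entropy, natural logarithm."""
    w = np.linalg.eigvalsh(rho)
    w = w[w > 1e-12]
    return float(-np.sum(w * np.log(w)))

def dephase(rho, p):
    """Unital, trace-preserving channel: rho -> (1 - p) rho + p Z rho Z."""
    Z = np.diag([1.0, -1.0])
    return (1 - p) * rho + p * (Z @ rho @ Z)

psi = np.array([np.sqrt(0.7), np.sqrt(0.3)])  # pure input, S(rho) = 0
rho = np.outer(psi, psi)

out = dephase(rho, 0.25)
assert abs(np.trace(out) - 1.0) < 1e-12  # trace is preserved
assert S(out) >= S(rho) - 1e-12          # entropy does not decrease
```

The off-diagonal coherence is damped, so the pure input becomes a strictly mixed output with positive entropy.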

8.5. Time development of mutual entropy

Frigerio studied the approach to stationarity of an open system in [68]. Let the input A and the output Ā be the same von Neumann algebra, and let Λ(ℝ₊) = {Λ_t ; t ∈ ℝ₊} be a dynamical semigroup (i.e., Λ_t (t ∈ ℝ₊) is a weak* continuous semigroup and Λ_t* is a normal channel) on A having at least one faithful normal stationary state θ (i.e., Λ_t*θ = θ for all t ∈ ℝ₊). For this Λ(ℝ₊), put
\mathcal{A}_{\Lambda} = \{ A \in \mathcal{A}\ ;\ \Lambda_t(A) = A,\ \forall t \in \mathbb{R}_+ \}
and
\mathcal{A}_{C} = \{ A \in \mathcal{A}\ ;\ \Lambda_t(A^{*}A) = \Lambda_t(A^{*})\Lambda_t(A),\ \forall t \in \mathbb{R}_+ \}
Then A Λ is a von Neumann subalgebra of A . Frigerio proved the following theorem [68].
Theorem 22 
(1) There exists a conditional expectation E from A to A Λ .
(2) When A_C = A_Λ, for any normal state ω, Λ_t*ω converges to a stationary state in the w*-sense as t → ∞.
From the above theorem, we obtain the following [16]:
Theorem 23 
For a normal channel Λ* and a normal state φ, if a measure μ ∈ M_φ(S) is orthogonal, A_Λ = A_C holds, and A is of type I, then I_μ(φ; Λ_t*) decreases in time and approaches I_μ(φ; E*) as t → ∞.
This theorem tells us that the mutual entropy decreases with respect to time if the system is dissipative, so that the mutual entropy can serve as a measure of irreversibility.

9. Entropies for General Quantum States

We briefly discuss some basic facts of the entropy theory for general quantum systems, which are needed to treat communication (computation) processes from a general standpoint, that is, independently of whether the system is classical or quantum.
Let (A, S(A)) be a C*-system. The entropy (uncertainty) of a state φ ∈ S seen from a reference system S, a weak*-compact convex subset of the whole state space S(A) on the C*-algebra A, was introduced by Ohya [16]. This entropy contains von Neumann's entropy and the classical entropy as special cases.
Every state φ S has a maximal measure μ pseudosupported on e x S (extreme points in S ) such that
\varphi = \int_{\mathcal{S}} \omega\, d\mu
The measure μ giving the above decomposition is not unique unless S is a Choquet simplex (i.e., for the set \hat{\mathcal{S}} \equiv \{\lambda\omega\ ;\ \omega \in \mathcal{S},\ \lambda \ge 0\}, define an order such that φ_1 ≥ φ_2 iff φ_1 − φ_2 ∈ \hat{\mathcal{S}}; then S is a Choquet simplex if \hat{\mathcal{S}} is a lattice for this order), so we denote the set of all such measures by M_φ(S). Take
D_\varphi(\mathcal{S}) \equiv \Big\{ \mu \in M_\varphi(\mathcal{S})\ ;\ \exists\,\{\mu_k\} \subset \mathbb{R}_+ \text{ and } \{\varphi_k\} \subset \mathrm{ex}\,\mathcal{S} \text{ s.t. } \sum_k \mu_k = 1,\ \mu = \sum_k \mu_k \delta(\varphi_k) \Big\}
where δ ( φ ) is the delta measure concentrated on { φ } . Put
H(\mu) = -\sum_k \mu_k \log \mu_k
for a measure μ D φ ( S ) .
Definition 24 
The entropy of a general state φ S w.r.t. S is defined by
S^{\mathcal{S}}(\varphi) = \begin{cases} \inf\{ H(\mu)\ ;\ \mu \in D_\varphi(\mathcal{S}) \} & (D_\varphi(\mathcal{S}) \ne \emptyset) \\ +\infty & (D_\varphi(\mathcal{S}) = \emptyset) \end{cases}
When S is the total space S A , we simply denote S S ( φ ) by S ( φ ) .
This entropy (mixing S -entropy) of a general state φ satisfies the following properties.
Theorem 25 
When A = B ( H ) and α t = A d ( U t ) (i.e., α t ( A ) = U t * A U t for any A A ) with a unitary operator U t , for any state φ given by φ ( · ) = tr ρ · with a density operator ρ, the following facts hold:
(1)
S(\varphi) = -\mathrm{tr}\,\rho \log \rho.
(2)
If φ is an α-invariant faithful state and every eigenvalue of ρ is non-degenerate, then S I ( α ) ( φ ) = S ( φ ) , where I α is the set of all α-invariant faithful states.
(3)
If φ K ( α ) , then S K α ( φ ) = 0 , where K α is the set of all KMS states.
Theorem 26 
For any φ K ( α ) , we have
(1) 
S K α ( φ ) S I ( α ) ( φ ) .
(2) 
S K α ( φ ) S ( φ ) .
This S (or mixing) entropy gives a measure of the uncertainty observed from the reference system S, and it has the following merits: even if the total entropy S(φ) is infinite, S^S(φ) is finite for some S, hence it can describe a sort of symmetry breaking in S. Properties similar to those of S(ρ) hold for S^S(φ). This entropy can be applied to characterize normal states and quantum Markov chains in von Neumann algebras.
The relative entropy for two general states φ and ψ was introduced by Araki and Uhlmann and their relation is considered by Donald and Hiai et al.
<Araki’s relative entropy> [8,9]
Let N be a σ-finite von Neumann algebra acting on a Hilbert space H, and let φ, ψ be normal states on N given by φ(·) = ⟨x, ·x⟩ and ψ(·) = ⟨y, ·y⟩ with x, y ∈ K (a positive natural cone ⊂ H). The operator S_{x,y} is defined by
S_{x,y}(Ay + z) = s^{N}(y)A^{*}x, \qquad A \in N,\ \ s^{N}(y)z = 0
on the domain Ny + (I − s^{N}(y))H, where s^{N}(y) is the projection from H onto \overline{N'y}, the N-support of y. Using this S_{x,y}, the relative modular operator Δ_{x,y} is defined as \Delta_{x,y} = (S_{x,y})^{*}\,\overline{S_{x,y}}, whose spectral decomposition is denoted by \int_0^{\infty} \lambda\, de_{x,y}(\lambda) (\overline{S_{x,y}} is the closure of S_{x,y}). Then the Araki relative entropy is given by
Definition 27 
S(\psi, \varphi) = \begin{cases} -\int_0^{\infty} \log \lambda\ d\langle y,\, e_{x,y}(\lambda) y\rangle & (\psi \ll \varphi) \\ +\infty & (\text{otherwise}) \end{cases}
where ψ ≪ φ means that φ(A*A) = 0 implies ψ(A*A) = 0 for A ∈ N.
<Uhlmann’s relative entropy> [10]
Let L be a complex linear space and p , q be two seminorms on L . Moreover, let H ( L ) be the set of all positive hermitian forms α on L satisfying | α ( x , y ) | p ( x ) q ( y ) for all x , y L . Then the quadratical mean Q M ( p , q ) of p and q is defined by
Q M ( p , q ) ( x ) = sup { α ( x , x ) 1 / 2 ; α H ( L ) } , x L
There exists a family of seminorms p t ( x ) of t [ 0 , 1 ] for each x L satisfying the following conditions:
(1)
For any x L , p t ( x ) is continuous in t,
(2)
p 1 / 2 = Q M ( p , q ) ,
(3)
p t / 2 = Q M ( p , p t ) ,
(4)
p ( t + 1 ) / 2 = Q M ( p t , q ) .
This seminorm p t is denoted by Q I t ( p , q ) and is called the quadratical interpolation from p to q. It is shown that for any positive hermitian forms α , β , there exists a unique function Q F t ( α , β ) of t [ 0 , 1 ] with values in the set H ( L ) such that Q F t ( α , β ) ( x , x ) 1 / 2 is the quadratical interpolation from α ( x , x ) 1 / 2 to β ( x , x ) 1 / 2 . The relative entropy functional S ( α , β ) ( x ) of α and β is defined as
S(\alpha, \beta)(x) = -\liminf_{t \to +0} \frac{1}{t}\,\big[\, QF_t(\alpha, \beta)(x, x) - \alpha(x, x) \,\big]
for x L . Let L be a *-algebra A and φ , ψ be positive linear functionals on A defining two hermitian forms φ L , ψ R such as φ L ( A , B ) = φ ( A * B ) and ψ R ( A , B ) = ψ ( B A * ) .
Definition 28 
The relative entropy of φ and ψ is defined by
S ( ψ , φ ) = S ( ψ R , φ L ) ( I )
<Ohya’s mutual entropy> [16]
Next we discuss the mutual entropy in C*−systems. For any φ S S ( A ) and a channel Λ * : S ( A ) S ( A ¯ ) , define the compound states by
\Phi_{\mu}^{\mathcal{S}} = \int_{\mathcal{S}} \omega \otimes \Lambda^{*}\omega\ d\mu
and
Φ 0 = φ Λ * φ
The first compound state generalizes the joint probability in classical systems and it exhibits the correlation between the initial state φ and the final state Λ * φ .
Definition 29 
The mutual entropy w.r.t. S and μ is
I μ S ( φ ; Λ ) = S ( Φ μ S , Φ 0 )
and the mutual entropy w.r.t. S is defined as
I S ( φ ; Λ * ) = lim ε 0 sup I μ S ( φ ; Λ * ) ; μ F φ ε ( S )
where
F_{\varphi}^{\varepsilon}(\mathcal{S}) = \begin{cases} \{ \mu \in D_\varphi(\mathcal{S})\ ;\ S^{\mathcal{S}}(\varphi) \le H(\mu) \le S^{\mathcal{S}}(\varphi) + \varepsilon < +\infty \} & (S^{\mathcal{S}}(\varphi) < +\infty) \\ M_\varphi(\mathcal{S}) & (S^{\mathcal{S}}(\varphi) = +\infty) \end{cases}
The following fundamental inequality is satisfied for almost all physical cases.
0 \le I^{\mathcal{S}}(\varphi; \Lambda^{*}) \le S^{\mathcal{S}}(\varphi)
The main properties of the relative entropy and the mutual entropy are shown in the following theorems.
Theorem 30 
(1) 
Positivity: S(\varphi, \psi) \ge 0, and S(\varphi, \psi) = 0 iff \varphi = \psi.
(2) 
Joint Convexity : S ( λ ψ 1 + ( 1 λ ) ψ 2 , λ φ 1 + ( 1 λ ) φ 2 ) λ S ( ψ 1 , φ 1 ) + ( 1 λ ) S ( ψ 2 , φ 2 ) for any λ [ 0 , 1 ] .
(3) 
Additivity : S ( ψ 1 ψ 2 , φ 1 φ 2 ) = S ( ψ 1 , φ 1 ) + S ( ψ 2 , φ 2 ) .
(4) 
Lower Semicontinuity: If \lim_n \|\psi_n - \psi\| = 0 and \lim_n \|\varphi_n - \varphi\| = 0, then S(\psi, \varphi) \le \liminf_n S(\psi_n, \varphi_n). Moreover, if there exists a positive number λ satisfying \psi_n \le \lambda\varphi_n, then \lim_n S(\psi_n, \varphi_n) = S(\psi, \varphi).
(5) 
Monotonicity : For a channel Λ * from S to S ¯ ,
S ( Λ * ψ , Λ * φ ) S ( ψ , φ )
(6) 
Lower Bound: \|\psi - \varphi\|^{2}/4 \le S(\psi, \varphi).
Remark 31 
This theorem is a generalization of Theorem 3.
<Connes-Narnhofer-Thirring Entropy>
Before closing this section, we mention the dynamical entropy introduced by Connes, Narnhofer and Thirring [50].
The CNT entropy H φ ( M ) of C*-subalgebra M A is defined by
H_{\varphi}(\mathcal{M}) \equiv \sup\Big\{ \sum_j \mu_j S(\varphi_j|_{\mathcal{M}},\ \varphi|_{\mathcal{M}})\ ;\ \varphi = \sum_j \mu_j \varphi_j \Big\}
where the supremum is taken over all finite decompositions φ = j μ j φ j of φ and φ M is the restriction of φ to M . This entropy is the mutual entropy when a channel is the restriction to subalgebra and the decomposition is orthogonal. There are some relations between the mixing entropy S S ( φ ) and the CNT entropy [26].
Theorem 32. 
(1) 
For any state φ on a unital C*-algebra A ,
S ( φ ) = H φ ( A )
(2) 
Let (A, G, α) be a W*-dynamical system with a certain group G, and let φ be a G-invariant normal state on A; then
S I ( α ) ( φ ) = H φ ( A α )
where A α is the fixed points algebra of A w.r.t. α.
(3) 
Let A be the C*-algebra C(H) of all compact operators on a Hilbert space H, G be a group, α be a *-automorphic action of G on A, and ρ be a density operator giving a G-invariant state. Then
S I ( α ) ( ρ ) = H ρ ( A α )
(4) 
There exists a model such that
S I ( α ) ( φ ) > H φ ( A α ) = 0

10. Entropy Exchange and Coherent Information

First we define the entropy exchange [43,70,71,72,73]. If a quantum operation Λ * is represented as
\Lambda^{*}(\rho) = \sum_i A_i \rho A_i^{*}, \qquad \sum_i A_i^{*}A_i \le I
then the entropy exchange of the quantum operation Λ * with input state ρ is defined to be
S e ( ρ , Λ * ) = S ( W ) = tr ( W log W )
where the matrix W has elements
W_{ij} = \frac{\mathrm{tr}(A_i \rho A_j^{*})}{\mathrm{tr}(\Lambda^{*}(\rho))}
Note that if \sum_i A_i^{*}A_i = I holds, then the quantum operation Λ* is a channel.
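The matrix W is easy to compute once Kraus operators are given. The following sketch is our illustration, not an example from the text; the amplitude-damping Kraus operators and the input state are arbitrary choices:

```python
import numpy as np

def S(rho):
    """von Neumann entropy, natural logarithm."""
    w = np.linalg.eigvalsh(rho)
    w = w[w > 1e-12]
    return float(-np.sum(w * np.log(w)))

# Amplitude-damping Kraus operators (illustrative choice, gamma = 0.3);
# sum_i A_i^* A_i = I, so Lambda* is a trace-preserving channel.
g = 0.3
A0 = np.array([[1.0, 0.0], [0.0, np.sqrt(1 - g)]])
A1 = np.array([[0.0, np.sqrt(g)], [0.0, 0.0]])
kraus = [A0, A1]

rho = np.array([[0.4, 0.2], [0.2, 0.6]])

out = sum(A @ rho @ A.conj().T for A in kraus)            # Lambda*(rho)
W = np.array([[np.trace(Ai @ rho @ Aj.conj().T) for Aj in kraus]
              for Ai in kraus]) / np.trace(out)           # W_ij = tr(A_i rho A_j^*) / tr(Lambda* rho)

S_e = S(W)   # entropy exchange S_e(rho, Lambda*) = S(W)
assert abs(np.trace(W) - 1.0) < 1e-12
assert S_e >= 0.0
```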
Definition 33 
The coherent information is defined by
I_c(\rho, \Lambda^{*}) = S\!\left( \frac{\Lambda^{*}(\rho)}{\mathrm{tr}(\Lambda^{*}(\rho))} \right) - S_e(\rho, \Lambda^{*})
Let ρ be a quantum state and Λ 1 * and Λ 2 * trace-preserving quantum operations. Then
S ( ρ ) I c ( ρ , Λ 1 * ) I c ( ρ , Λ 2 * Λ 1 * )
which is a property similar to that of the quantum mutual entropy.
Another quantity is defined from this coherent information together with the von Neumann entropy S(ρ):
I_{CM}(\rho, \Lambda^{*}) \equiv S(\rho) + S(\Lambda^{*}\rho) - S_e(\rho, \Lambda^{*})
We call this mutual-entropy-type quantity the coherent mutual entropy here.
However, these coherent-information-type quantities cannot be considered candidates for the mutual entropy, owing to the theorems of the next section.

11. Comparison of Various Quantum Mutual-Type Entropies

There exist several different quantities of mutual-entropy type. We compare these mutual-type entropies here [60,74].
Let {x_n} be a CONS in the input Hilbert space H_1, and let a quantum channel Λ* be given by
\Lambda^{*}\rho = \sum_n A_n \rho A_n
where A_n \equiv |x_n\rangle\langle x_n| is a one-dimensional projection satisfying
\sum_n A_n = I
Then one has the following theorem:
Theorem 34 
When {A_j} is a projection-valued measure with dim(ran A_j) = 1, for an arbitrary state ρ we have (1) I(\rho, \Lambda^{*}) \le \min\{S(\rho), S(\Lambda^{*}\rho)\}, (2) I_C(\rho, \Lambda^{*}) = 0, (3) I_{CM}(\rho, \Lambda^{*}) = S(\rho).
Proof. 
For any density operator \rho \in S(H_1) and the channel Λ* given above, one has W_{ij} = \mathrm{tr}(A_i \rho A_j) = \delta_{ij}\langle x_i, \rho x_j\rangle, so that
W = \sum_n \langle x_n, \rho x_n\rangle\, A_n
Then the entropy exchange of ρ with respect to the quantum channel Λ* is
S_e(\rho, \Lambda^{*}) = S(W) = -\sum_n \langle x_n, \rho x_n\rangle \log \langle x_n, \rho x_n\rangle
Since
\Lambda^{*}\rho = \sum_n A_n \rho A_n = \sum_n \langle x_n, \rho x_n\rangle\, A_n = W
the coherent information of ρ with respect to the quantum channel Λ* is given by
I_C(\rho, \Lambda^{*}) \equiv S(\Lambda^{*}\rho) - S_e(\rho, \Lambda^{*}) = S(W) - S(W) = 0
for any \rho \in S(H_1). The Lindblad-Nielsen entropy becomes
I_{LN}(\rho, \Lambda^{*}) \equiv S(\rho) + S(\Lambda^{*}\rho) - S_e(\rho, \Lambda^{*}) = S(\rho) + I_C(\rho, \Lambda^{*}) = S(\rho)
for any \rho \in S(H_1). The quantum mutual entropy becomes
I(\rho, \Lambda^{*}) = \sup\Big\{ S(\Lambda^{*}\rho) - \sum_n \mu_n S(\Lambda^{*}E_n) \Big\}
where the sup is taken over all Schatten decompositions \rho = \sum_m \mu_m E_m, E_m = |y_m\rangle\langle y_m|, \langle y_n, y_m\rangle = \delta_{nm}, so we obtain
I(\rho, \Lambda^{*}) = S(\Lambda^{*}\rho) - \sum_m \mu_m S(\Lambda^{*}E_m) = S(\Lambda^{*}\rho) - \sum_m \mu_m \sum_k \eta(\tau_k^{(m)}) \le \min\{ S(\rho),\ S(\Lambda^{*}\rho) \}
where \eta(t) \equiv -t \log t and \tau_k^{(m)} \equiv |\langle x_k, y_m\rangle|^{2}. This means that I(\rho, \Lambda^{*}) takes various values depending on the input state ρ; for instance,
(1)\ \sum_m \mu_m \sum_k \eta(\tau_k^{(m)}) = 0 \ \Rightarrow\ I(\rho, \Lambda^{*}) = S(\Lambda^{*}\rho), \qquad (2)\ \sum_m \mu_m \sum_k \eta(\tau_k^{(m)}) > 0 \ \Rightarrow\ I(\rho, \Lambda^{*}) < S(\Lambda^{*}\rho). ■
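The computations in this proof can be reproduced numerically. The sketch below is our illustration (the input state is an arbitrary choice); it implements the basis-measurement channel Λ*ρ = Σ_n A_n ρ A_n and confirms that the coherent information vanishes while the Lindblad-Nielsen (coherent mutual) entropy equals S(ρ):

```python
import numpy as np

def S(rho):
    """von Neumann entropy, natural logarithm."""
    w = np.linalg.eigvalsh(rho)
    w = w[w > 1e-12]
    return float(-np.sum(w * np.log(w)))

d = 3
kraus = [np.diag(e) for e in np.eye(d)]   # A_n = |x_n><x_n| in the standard basis

# An input state with off-diagonal coherences (arbitrary test choice).
v = np.array([0.6, 0.5, 0.4])
rho = 0.5 * np.outer(v, v) / (v @ v) + 0.5 * np.diag([0.5, 0.3, 0.2])

out = sum(A @ rho @ A for A in kraus)     # Lambda* rho: the diagonal part of rho
# Here A_j is real and self-adjoint, so A_j^* = A_j.
W = np.array([[np.trace(Ai @ rho @ Aj) for Aj in kraus] for Ai in kraus])
S_e = S(W)                                # entropy exchange (channel is trace preserving)

I_C = S(out) - S_e                        # coherent information
I_LN = S(rho) + S(out) - S_e              # Lindblad-Nielsen (coherent mutual) entropy

assert abs(I_C) < 1e-10                   # vanishes, since W = Lambda* rho here
assert abs(I_LN - S(rho)) < 1e-10
```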
We can further prove that the coherent information vanishes for a general class of channels.
Theorem 35 
Let {x_n} be a CONS in the input Hilbert space and {ρ_n} a sequence of density operators in the output Hilbert space. Consider a channel Λ* given by
\Lambda^{*}(\rho) = \sum_n \langle x_n,\, \rho x_n\rangle\, \rho_n
where ρ is any state in the input Hilbert space. (One can check that it is a trace-preserving CP map.) Then the coherent information vanishes: I_c(\rho, \Lambda^{*}) = 0 for any state ρ.
Remark 36 
The channel of the form \Lambda^{*}(\rho) = \sum_n \langle x_n, \rho x_n\rangle \rho_n can be considered as a classical-quantum channel iff the classical probability distribution p_n = \langle x_n, \rho x_n\rangle is given a priori.
For the attenuation channel Λ 0 * , one can obtain the following theorems [74,75]:
Theorem 37 
For any state \rho = \sum_n \lambda_n |n\rangle\langle n| and the attenuation channel Λ_0^{*} with |\alpha|^{2} = |\beta|^{2} = \tfrac{1}{2}, one has
1. 
0 \le I(\rho; \Lambda_0^{*}) \le \min\{ S(\rho), S(\Lambda_0^{*}\rho) \} (Ohya mutual entropy),
2. 
I C ρ ; Λ 0 * = 0 (coherent entropy),
3. 
I L N ρ ; Λ 0 * = S ρ (Lindblad-Nielsen entropy).
Theorem 38 
For the attenuation channel Λ_0^{*} and the input state \rho = \lambda|0\rangle\langle 0| + (1 - \lambda)|\theta\rangle\langle\theta|, we have
1. 
0 \le I(\rho; \Lambda_0^{*}) \le \min\{ S(\rho), S(\Lambda_0^{*}\rho) \} (Ohya mutual entropy),
2. 
-S(\rho) \le I_C(\rho; \Lambda_0^{*}) \le S(\rho) (coherent entropy),
3. 
0 \le I_{LN}(\rho; \Lambda_0^{*}) \le 2S(\rho) (Lindblad-Nielsen entropy).
The above theorem shows that the coherent entropy I_C(ρ; Λ_0*) takes negative values for |α|² < |β|², and that the Lindblad-Nielsen entropy I_{LN}(ρ; Λ_0*) is greater than the von Neumann entropy of the input state ρ for |α|² > |β|².
From these theorems, the Ohya mutual entropy I(ρ; Λ*) is the only one satisfying the inequality that holds in classical systems, so that it is the most suitable candidate for a quantum extension of the classical mutual entropy.

12. Quantum Capacity and Coding

We discuss the following topics in quantum information: (1) the channel capacity for quantum communication processes, obtained by applying the quantum mutual entropy; (2) formulations of quantum analogues of McMillan's theorem.
As we discussed, it is important to check the ability, or efficiency, of a channel. It is the channel capacity that describes this ability mathematically. Here we discuss two types of channel capacity, namely, the capacity of a quantum channel Γ* and that of a classical (classical-quantum-classical) channel Ξ̃*∘Γ*∘Ξ*.

12.1. Capacity of quantum channel

The capacity of a quantum channel is the ability of information transmission of the channel itself, so that it does not depend on how to code a message being treated as a classical object.
As was discussed in the Introduction, the main theme of quantum information is to study the information carried by a quantum state and its change associated with a change of the quantum state due to the effect of a quantum channel describing a certain dynamics, in a generalized sense, of a quantum system. So the essential point of quantum communication through a quantum channel is the change of quantum states by the quantum channel, which should first be considered free from any coding of messages. A message is treated as a classical object, so that information transmission starting from messages and their quantum codings is semi-quantum and is discussed in the next subsection. This subsection treats the pure quantum case, in which the (pure) quantum capacity is discussed as a direct extension of the classical (Shannon) capacity.
Before starting the mathematical discussion, we explain a bit more what we mean by "pure quantum" transmission capacity. We have to start from an arbitrary quantum state and a channel, and then compute the supremum of the mutual entropy to define the "pure" quantum capacity. One is often confused on this point: for example, one starts from the coding of a message, computes the supremum of the mutual entropy, and calls that supremum the capacity of a quantum channel; this is not purely quantum but a classical capacity through a quantum channel.
Even when the coding is a quantum coding and the coded message is sent to a receiver through a quantum channel, if one starts from a classical state, i.e., a probability distribution of messages, then the resulting capacity is not the capacity of the quantum channel itself. In this case the usual Shannon theory applies, because the conditional distribution can easily be computed in the usual (classical) way. The supremum so obtained is the capacity of a classical-quantum-classical channel, and it belongs to the second category, discussed in the next subsection.
The capacity of a quantum channel Γ* : S(H) → S(K) is defined as follows. Let S_0 (⊂ S(H)) be the set of all states prepared for the expression of information. Then the quantum capacity of the channel Γ* with respect to S_0 is defined by
C S 0 Γ * = sup { I ρ ; Γ * ; ρ S 0 }
Here I(ρ; Γ*) is the mutual entropy given in Section 7 with Λ* = Γ*. When S_0 = S(H), C^{S(H)}(Γ*) is denoted by C(Γ*) for simplicity. The capacity C(Γ*) is the largest amount of information that can possibly be sent through the channel Γ*.
We have
Theorem 39 
0 \le C^{S_0}(\Gamma^{*}) \le \sup\{ S(\rho)\ ;\ \rho \in S_0 \}
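This bound can be probed numerically by maximizing the mutual entropy over sampled input states. The sketch below is our illustration; the dephasing channel, the sampling scheme, and the sample count are all arbitrary choices. It estimates C^{S_0}(Γ*) for qubits and checks the estimate against sup S(ρ) = log 2:

```python
import numpy as np

def logm_psd(rho):
    """Matrix logarithm of a positive semidefinite matrix (eigenvalues clipped)."""
    w, v = np.linalg.eigh(rho)
    return v @ np.diag(np.log(np.clip(w, 1e-15, None))) @ v.conj().T

def rel_ent(a, b):
    """Umegaki relative entropy S(a, b) = tr a (log a - log b)."""
    return float(np.trace(a @ (logm_psd(a) - logm_psd(b))).real)

def mutual_entropy(rho, channel):
    """I(rho; Lambda*) = sum_k mu_k S(Lambda* E_k, Lambda* rho); assumes
    a nondegenerate spectrum so the Schatten decomposition is unique."""
    w, v = np.linalg.eigh(rho)
    out = channel(rho)
    return sum(mu * rel_ent(channel(np.outer(vec, vec.conj())), out)
               for mu, vec in zip(w, v.T))

X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.diag([1.0 + 0j, -1.0])

rng = np.random.default_rng(0)
def random_qubit_state():
    # Bloch vector strictly inside the ball -> nondegenerate eigenvalues
    v = rng.normal(size=3)
    v *= rng.uniform(0.1, 0.95) / np.linalg.norm(v)
    return 0.5 * (np.eye(2, dtype=complex) + v[0] * X + v[1] * Y + v[2] * Z)

dephase = lambda r: 0.7 * r + 0.3 * (Z @ r @ Z)   # unital qubit channel

# Estimate the capacity over S_0 = qubit states by random search.
best = max(mutual_entropy(random_qubit_state(), dephase) for _ in range(200))
assert 0.0 <= best <= np.log(2) + 1e-9   # C <= sup S(rho) = log 2
```

Random search only gives a lower estimate of the supremum, but the bound 0 ≤ C ≤ log 2 holds for every sampled state.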
Remark 40 
We also considered the pseudo-quantum capacity C_p(Γ*), defined [76] with the pseudo-mutual entropy I_p(ρ; Γ*), in which the supremum is taken over all finite decompositions instead of all orthogonal pure decompositions:
I_p(\rho; \Gamma^{*}) = \sup\Big\{ \sum_k \lambda_k S(\Gamma^{*}\rho_k, \Gamma^{*}\rho)\ ;\ \rho = \sum_k \lambda_k \rho_k,\ \text{finite decomposition} \Big\}
However, the pseudo-mutual entropy is not well matched to the conditions explained above, and it is difficult to compute numerically. It is easy to see that
C^{S_0}(\Gamma^{*}) \le C_p^{S_0}(\Gamma^{*})
It is worth noting that, in order to discuss the details of the transmission process for a sequence of n messages, we have to consider a channel on the n-tuple space and the average mutual entropy (transmission rate) per message.

12.2. Capacity of classical-quantum-classical channel

The capacity of the C-Q-C channel Λ* = Ξ̃*∘Γ*∘Ξ* is the capacity of the information transmission process starting from the coding of messages; therefore it can be considered as the capacity including a coding (and a decoding). The channel Ξ* sends a classical state to a quantum one, and the channel Ξ̃* sends a quantum state to a classical one. Note that Ξ* and Ξ̃* can be considered as the dual maps of ξ : B(H) → C^n, A ↦ (φ_1(A), φ_2(A), …, φ_n(A)), and ξ̃ : C^m → B(K), (c_1, c_2, …, c_m) ↦ Σ_j c_j A_j, respectively.
The capacity of the C-Q-C channel Λ * = Ξ ˜ * Γ * Ξ * is
C P 0 Ξ ˜ * Γ * Ξ * = sup { I ρ ; Ξ ˜ * Γ * Ξ * ; ρ P 0 }
where P_0 (⊂ P(Ω)) is the set of all probability distributions prepared for input (a priori) states (distributions or probability measures, i.e., classical states). Moreover, the coding-free capacity is found by taking the supremum of the mutual entropy over all probability distributions and all codings Ξ*:
C c P 0 Ξ ˜ * Γ * = sup { I ρ ; Ξ ˜ * Γ * Ξ * ; ρ P 0 , Ξ * }
The last capacity is both coding- and decoding-free, and it is given by
C c d P 0 Γ * = sup { I ρ ; Ξ ˜ * Γ * Ξ * ; ρ P 0 , Ξ * , Ξ ˜ * }
These capacities C_c^{P_0}, C_{cd}^{P_0} do not measure the ability of the quantum channel Γ* itself, but rather the ability of Γ* combined with the coding and decoding.
The above three capacities C P 0 , C c P 0 , C c d P 0 satisfy the following inequalities
0 \le C^{P_0}(\tilde{\Xi}^{*}\Gamma^{*}\Xi^{*}) \le C_c^{P_0}(\tilde{\Xi}^{*}\Gamma^{*}) \le C_{cd}^{P_0}(\Gamma^{*}) \le \sup\{ S(\rho)\ ;\ \rho \in P_0 \}
Here S(ρ) is the Shannon entropy -\sum_k p_k \log p_k of the initial probability distribution {p_k} of the messages.

12.3. Bound of mutual entropy and capacity

Here we discuss the bound of mutual entropy and capacity. The discussion of this subsection is based on the papers [30,31,41,42,77].
To each input symbol x_i there corresponds a state σ_i of the quantum communication system; σ_i functions as the codeword of x_i. The coded state is the convex combination
\rho = \sum_i p_i \delta_i\ \longmapsto\ \sigma = \sum_i p_i\, \Xi^{*}\delta_i = \sum_i p_i \sigma_i
whose coefficients are the corresponding probabilities; p_i is the probability that the letter x_i is transmitted over the channel. To each output symbol y_j there corresponds a non-negative observable, that is, a selfadjoint operator Q_j on the output Hilbert space K such that \sum_j Q_j = I ({Q_j} is called a POVM). In terms of the quantum states, the transition probabilities are \mathrm{tr}(\Gamma^{*}\sigma_i\, Q_j), and the probability that x_i was sent and y_j is read is
p_{ij} = p_i\, \mathrm{tr}(\Gamma^{*}\sigma_i\, Q_j)
On the basis of this joint probability distribution the classical mutual information is given by
I_{cl} = \sum_{i,j} p_{ij} \log \frac{p_{ij}}{p_i q_j}
where q_j = \mathrm{tr}(\Gamma^{*}\sigma\, Q_j). The next theorem provides a fundamental bound for the mutual information in terms of the von Neumann entropy; it was proved by Holevo [21] in 1973. Ohya introduced the quantum mutual entropy by means of the relative entropy, as discussed above, in 1983.
Theorem 41 
With the above notation
I_{cl} = \sum_{i,j} p_{ij} \log \frac{p_{ij}}{p_i q_j} \le S(\Gamma^{*}\sigma) - \sum_i p_i S(\Gamma^{*}\sigma_i)
holds.
Holevo’s upper bound can now be expressed by
S(\Gamma^{*}\sigma) - \sum_i p_i S(\Gamma^{*}\sigma_i) = \sum_i p_i S(\Gamma^{*}\sigma_i, \Gamma^{*}\sigma)
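The Holevo bound is easy to verify in a two-codeword example. The following sketch is our illustration; the codewords, prior, and POVM are arbitrary choices (the channel Γ* is taken as the identity, so Γ*σ_i = σ_i):

```python
import numpy as np

def S(rho):
    """von Neumann entropy, natural logarithm."""
    w = np.linalg.eigvalsh(rho)
    w = w[w > 1e-12]
    return float(-np.sum(w * np.log(w)))

# Two pure, non-orthogonal codewords with equal priors, read out in the
# computational basis (all numbers are arbitrary test choices).
p = np.array([0.5, 0.5])
theta = np.pi / 6
s0 = np.outer([1.0, 0.0], [1.0, 0.0])
v1 = np.array([np.cos(theta), np.sin(theta)])
s1 = np.outer(v1, v1)
sigmas = [s0, s1]
Q = [np.diag([1.0, 0.0]), np.diag([0.0, 1.0])]   # POVM, sum_j Q_j = I

sigma = sum(pi * si for pi, si in zip(p, sigmas))
chi = S(sigma) - sum(pi * S(si) for pi, si in zip(p, sigmas))  # Holevo upper bound

P = np.array([[pi * np.trace(si @ Qj).real for Qj in Q]
              for pi, si in zip(p, sigmas)])      # p_ij = p_i tr(sigma_i Q_j)
q = P.sum(axis=0)
I_cl = sum(P[i, j] * np.log(P[i, j] / (p[i] * q[j]))
           for i in range(2) for j in range(2) if P[i, j] > 1e-15)

assert I_cl <= chi + 1e-12   # the Holevo bound of Theorem 41
```

Since the codewords overlap, the classical readout information I_cl falls strictly below the bound χ.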
For the general quantum case we have the following inequality, according to Theorem 9.
Theorem 42 
When the Schatten decomposition (i.e., the one-dimensional spectral decomposition) \rho = \sum_i p_i \rho_i is unique,
I_{cl} \le I(\rho; \Gamma^{*}) = \sum_i p_i S(\Gamma^{*}\rho_i, \Gamma^{*}\rho)
for any channel Γ * .
Going back to the general discussion, an input state ρ is the probability distribution {λ_k} of messages, and its Schatten decomposition is unique, \rho = \sum_k \lambda_k \delta_k with delta measures δ_k, so the mutual entropy is written as
I(\rho; \tilde{\Xi}^{*}\Gamma^{*}\Xi^{*}) = \sum_k \lambda_k S(\tilde{\Xi}^{*}\Gamma^{*}\Xi^{*}\delta_k,\ \tilde{\Xi}^{*}\Gamma^{*}\Xi^{*}\rho)
If the coding Ξ* is a quantum coding, then Ξ*δ_k is a quantum state. Denote the coded quantum state by σ_k = Ξ*δ_k as above and put \sigma = \Xi^{*}\rho = \sum_k \lambda_k \Xi^{*}\delta_k = \sum_k \lambda_k \sigma_k. Then the above mutual entropy for a classical-quantum-classical channel \tilde{\Xi}^{*}\Gamma^{*}\Xi^{*} is written as
I(\rho; \tilde{\Xi}^{*}\Gamma^{*}\Xi^{*}) = \sum_k \lambda_k S(\tilde{\Xi}^{*}\Gamma^{*}\sigma_k,\ \tilde{\Xi}^{*}\Gamma^{*}\sigma)
This is the expression of the mutual entropy of the whole information transmission process starting from a coding of classical messages.
Remark that if Σ_k λ_k S(Γ*σ_k) is finite, then (18) becomes
I(ρ; Ξ̃*∘Γ*∘Ξ*) = S(Ξ̃*∘Γ*σ) − Σ_k λ_k S(Ξ̃*∘Γ*σ_k)
Further, if ρ is a probability measure having a density function f(λ), that is, ρ(A) = ∫_A f(λ) dλ, where A is an interval in R, and each λ corresponds to a quantum coded state σ(λ), then
σ = ∫ f(λ) σ(λ) dλ
and
I(ρ; Ξ̃*∘Γ*∘Ξ*) = S(Ξ̃*∘Γ*σ) − ∫ f(λ) S(Ξ̃*∘Γ*σ(λ)) dλ
One can prove that this is less than
S(Γ*σ) − ∫ f(λ) S(Γ*σ(λ)) dλ
This upper bound is a special case of the inequality
I(ρ; Ξ̃*∘Γ*∘Ξ*) ≤ I(ρ; Γ*∘Ξ*)
which comes from the monotonicity of the relative entropy and gives the proof of Theorem 42 above.
We can use the Bogoliubov inequality
S(A, B) ≥ tr A (log tr A − log tr B)
where A and B are two positive Hermitian operators. Combining this with the monotonicity of the mutual entropy, one has the following bound of the mutual entropy I(ρ; Γ*∘Ξ*) [41].
Theorem 43 
For a probability distribution ρ = (λ_k) and a quantum coded state σ = Ξ*ρ = Σ_k λ_k σ_k, λ_k ≥ 0, Σ_k λ_k = 1, one has the following inequality for any quantum channel decomposed as Γ* = Γ_1*∘Γ_2* such that Γ_1*σ = Σ_i A_i σ A_i*, Σ_i A_i* A_i = I:
Σ_k λ_k S(σ_k, σ) ≥ Σ_{k,i} λ_k tr(A_i Γ_2*σ_k A_i*) [ log tr(A_i Γ_2*σ_k A_i*) − log tr(A_i Γ_2*σ A_i*) ]
When the channel Γ_2* is the identity, Γ_2*σ_k = σ_k, the above inequality reduces to the bound of Theorem 41:
Σ_k λ_k S(σ_k, σ) ≥ Σ_{k,i} λ_k tr(B_i σ_k) [ log tr(B_i σ_k) − log tr(B_i σ) ]
where B i = A i * A i .
Note that k λ k S ( σ k , σ ) and i , k λ k S ( A i Γ 2 * σ k A i * , A i Γ 2 * σ A i * ) are the quantum mutual entropy I ρ ; Γ * for special channels Γ * as discussed above and that the lower bound is equal to the classical mutual entropy, which depends on the POVM B i = A i * A i .
Using the above upper and lower bounds of the mutual entropy, we can compute these bounds of the capacity in many different cases.
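A small check of these bounds (a sketch; the two-state ensemble and the projective POVM are illustrative assumptions, with Γ* taken as the identity channel) shows that the classical mutual information extracted by a measurement stays below the Holevo quantity:

```python
import numpy as np

def vn_entropy(rho):
    """von Neumann entropy (natural log)."""
    w = np.linalg.eigvalsh(rho)
    w = w[w > 1e-12]
    return float(-np.sum(w * np.log(w)))

# Hypothetical codeword ensemble (identity channel for simplicity).
v0 = np.array([1.0, 0.0]); v1 = np.array([1.0, 1.0]) / np.sqrt(2)
sigmas = [np.outer(v0, v0), np.outer(v1, v1)]
p = np.array([0.5, 0.5])
sigma = p[0] * sigmas[0] + p[1] * sigmas[1]

# POVM B_j from a projective measurement in the {|0>, |1>} basis.
B = [np.diag([1.0, 0.0]), np.diag([0.0, 1.0])]

# Joint distribution p_{ji} = p_i tr(sigma_i B_j) and classical mutual information.
joint = np.array([[pi * np.trace(si @ Bj).real for Bj in B]
                  for pi, si in zip(p, sigmas)])   # joint[i, j]
q = joint.sum(axis=0)
I_cl = sum(joint[i, j] * np.log(joint[i, j] / (p[i] * q[j]))
           for i in range(2) for j in range(2) if joint[i, j] > 0)

# Holevo upper bound of Theorem 41.
chi = vn_entropy(sigma) - sum(pi * vn_entropy(si) for pi, si in zip(p, sigmas))
print(I_cl, chi)
```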

13. Computation of Capacity

Shannon’s communication theory is largely of asymptotic character: the message length N is supposed to be very large. So we consider the N-fold tensor products of the input and output Hilbert spaces H and K,
H^{⊗N} = ⊗_{i=1}^N H,  K^{⊗N} = ⊗_{i=1}^N K
Note that
B(H^{⊗N}) = ⊗_{i=1}^N B(H),  B(K^{⊗N}) = ⊗_{i=1}^N B(K)
A channel Λ_N*: S(H^{⊗N}) → S(K^{⊗N}) sends density operators acting on H^{⊗N} into those acting on K^{⊗N}. In particular, we take a memoryless channel, the N-fold tensor product of the same single-site channel: Λ_N* = Λ* ⊗ ⋯ ⊗ Λ*. In this setting we compute the quantum capacity and the classical-quantum-classical capacity, denoted by C_q and C_cq below.
A pseudo-quantum code (of order N) is a probability distribution on S(H^{⊗N}) with finite support in the set of product states. So {(p_i), (φ_i)} is a pseudo-quantum code if (p_i) is a probability vector and the φ_i are product states of B(H^{⊗N}). This code is nothing but a quantum code for a classical input (i.e., a classical-quantum channel) such that p = Σ_j p_j δ_j ↦ φ = Σ_j p_j φ_j, as discussed above. Each quantum state φ_i is sent over the quantum mechanical medium (e.g., an optical fiber) and yields the output quantum state Λ_N* φ_i. The performance of coding and transmission is measured by the quantum mutual entropy
I(φ; Λ_N*) = I((p_i), (φ_i); Λ_N*) = Σ_i p_i S(Λ_N* φ_i, Λ_N* φ)
We regard φ = Σ_i p_i φ_i as the quantum state of the N-component quantum system during the information transmission. Taking the supremum over certain classes of pseudo-quantum codes, we obtain various capacities of the channel. For memoryless channels the supremum is over product states, so the capacity is
C cq ( Λ N * ) = sup { I ( ( p i ) , ( φ i ) ; Λ N * ) ; ( ( p i ) , ( φ i ) ) is a pseudo-quantum code }
Next we consider a subclass of pseudo-quantum codes. A quantum code is defined by the additional requirement that {φ_i} be a set of pairwise orthogonal pure states. Such a code is purely quantum: we start from a quantum state φ and take an orthogonal extremal decomposition φ = Σ_i p_i φ_i, which is not unique; the coding consists in choosing such an orthogonal extremal decomposition. The quantum mutual entropy is
I ( φ ; Λ N * ) = sup { i p i S ( Λ N * φ i , Λ N * φ ) ; i p i φ i = φ }
where the supremum is over all orthogonal extremal decompositions i p i φ i = φ as defined in Section 7. Then we arrive at the capacity
C q ( Λ N * ) = sup { I ( φ ; Λ N * ) : φ } = sup { I ( ( p i ) , ( φ i ) ; Λ N * ) : ( ( p i ) , ( φ i ) ) is a quantum code }
It follows from the definition that
C q ( Λ N * ) C cq ( Λ N * )
holds for every channel.
Proposition 44 
For a memoryless channel the sequences C cq ( Λ N * ) and C q ( Λ N * ) are subadditive.
Therefore the following limits exist and they coincide with the infimum.
C cq ˜ = lim N 1 N C cq Λ N * , C q ˜ = lim N 1 N C q Λ N *
(For multiple channels with some memory effect, one may take the limsup in (20) to get a good concept of capacity per single use.)
Example 45 
Let Λ * be a channel on the 2 × 2 density matrices such that
Λ* : ( a  b ; b̄  c ) ↦ ( a  0 ; 0  c )
Consider the input density matrix
ρ_λ = ½ ( 1  2λ−1 ; 2λ−1  1 ),  0 < λ < 1
For λ 1 / 2 the orthogonal extremal decomposition is unique, in fact
ρ_λ = (λ/2) ( 1  1 ; 1  1 ) + ((1−λ)/2) ( 1  −1 ; −1  1 )
and we have
I(ρ_λ, Λ*) = 0 for λ ≠ 1/2
However, I(ρ_{1/2}, Λ*) = log 2. Since C_q(Λ*) ≤ C_cq(Λ*) ≤ log 2, we conclude that C_q(Λ*) = C_cq(Λ*) = log 2.
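The two mutual-entropy values of this example can be reproduced numerically (a sketch; the λ = 1/4 value below is an illustrative choice for the unique-decomposition case):

```python
import numpy as np

def mlog(m, eps=1e-12):
    w, v = np.linalg.eigh(m)
    return v @ np.diag(np.log(np.clip(w, eps, None))) @ v.conj().T

def S_rel(a, b):
    """Umegaki relative entropy S(a, b) = tr a (log a - log b)."""
    return float(np.trace(a @ (mlog(a) - mlog(b))).real)

def channel(rho):
    """Lambda* of Example 45: delete the off-diagonal part."""
    return np.diag(np.diag(rho))

def mutual_entropy(decomposition):
    """I = sum_i p_i S(Lambda* rho_i, Lambda* rho) for a given orthogonal
    extremal decomposition [(p_i, rho_i), ...]."""
    rho = sum(p * r for p, r in decomposition)
    return sum(p * S_rel(channel(r), channel(rho)) for p, r in decomposition)

P_plus  = 0.5 * np.array([[1.0,  1.0], [ 1.0, 1.0]])
P_minus = 0.5 * np.array([[1.0, -1.0], [-1.0, 1.0]])

# lambda = 1/4: the unique orthogonal extremal decomposition gives I = 0.
I_quarter = mutual_entropy([(0.25, P_plus), (0.75, P_minus)])

# lambda = 1/2: the eigenbasis decomposition {|0><0|, |1><1|} attains log 2.
E0 = np.diag([1.0, 0.0]); E1 = np.diag([0.0, 1.0])
I_half = mutual_entropy([(0.5, E0), (0.5, E1)])
print(I_quarter, I_half)
```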

13.1. Divergence center

In order to estimate the quantum mutual entropy, we introduce the concept of the divergence center. Let {ω_i : i ∈ I} be a family of states and r > 0 a constant.
Definition 46 
We say that the state ω is a divergence center for a family of states { ω i : i I } with radius r if
S(ω_i, ω) ≤ r, for every i ∈ I
In the following discussion about the geometry of relative entropy (or divergence as it is called in information theory) the ideas of the divergence center can be recognized very well.
Lemma 47 
Let ( ( p i ) , ( φ i ) ) be a quantum code for the channel Λ * and ω a divergence center with radius r for { Λ * φ i } . Then
I((p_i), (φ_i); Λ*) ≤ r
Definition 48 
Let { ω i : i I } be a family of states. We say that the state ω is an exact divergence center with radius r if
r = inf φ sup i { S ( ω i , φ ) }
and ω is a minimizer for the right hand side.
When r is finite, a minimizer exists, because φ ↦ sup{S(ω_i, φ) : i ∈ I} is lower semicontinuous with compact level sets (cf. Proposition 5.27 in [17]).
Lemma 49 
Let ψ 0 , ψ 1 and ω be states of B ( K ) such that the Hilbert space K is finite dimensional and set ψ λ = ( 1 λ ) ψ 0 + λ ψ 1 ( 0 λ 1 ) . If S ( ψ 0 , ω ) , S ( ψ 1 , ω ) are finite and
S(ψ_λ, ω) ≤ S(ψ_1, ω),  (0 ≤ λ ≤ 1)
then
S(ψ_1, ω) + S(ψ_0, ψ_1) ≤ S(ψ_0, ω)
Lemma 50 
Let {ω_i : i ∈ I} be a finite set of states of B(K) such that the Hilbert space K is finite dimensional. Then the exact divergence center is unique, and it lies in the convex hull of the states ω_i.
Theorem 51 
Let Λ * : S ( H ) S ( K ) be a channel with finite dimensional K . Then the capacity C cq ( Λ * ) = C q ( Λ * ) is the divergence radius of the range of Λ * .

13.2. Comparison of capacities

Up to now our discussion has concerned the capacities of coding and transmission, which are bounds for the performance of quantum coding and quantum transmission. After a measurement is performed, the quantum channel becomes classical and Shannon’s theory is applied. The total capacity (or classical-quantum-classical capacity) of a quantum channel Λ * is
C cqc ( Λ * ) = sup { I ( ( p i ) , ( φ i ) ; Ξ ˜ * Λ * ) }
where the supremum is taken over all pseudo-quantum codes ((p_i), (φ_i)) and all measurements Ξ̃*. Due to the monotonicity of the mutual entropy
C_cqc(Λ*) ≤ C_cq(Λ*)
and similarly
C̃_cqc(Λ*) ≡ lim sup_{N→∞} (1/N) C_cqc(Λ_N*) ≤ C̃_cq(Λ*)
holds for the capacities per single use.
Example 52 
Any 2 × 2 density operator has the following standard representation
ρ x = 1 2 ( I + x 1 σ 1 + x 2 σ 2 + x 3 σ 3 )
where σ_1, σ_2, σ_3 are the Pauli matrices and x = (x_1, x_2, x_3) ∈ R³ with x_1² + x_2² + x_3² ≤ 1. For a positive semi-definite 3 × 3 matrix A the application Γ*: ρ_x ↦ ρ_{Ax} gives a channel when ‖A‖ ≤ 1. Let us compute the capacities of Γ*. Since a unitary conjugation obviously does not change the capacity, we may assume that A is diagonal with eigenvalues 1 ≥ λ_1 ≥ λ_2 ≥ λ_3 ≥ 0. The range of Γ* is visualized as an ellipsoid with (Euclidean) diameter 2λ_1. It is not difficult to see that the tracial state τ is the exact divergence center of the segment connecting the states (I ± λ_1 σ_1)/2, and hence τ must be the divergence center of the whole range. The divergence radius is
S( ½(I + λ_1 σ_1), τ ) = log 2 − S( ½ diag(1 + λ_1, 1 − λ_1) ) = log 2 − η((1 + λ_1)/2) − η((1 − λ_1)/2)
where η(t) ≡ −t log t. This gives the capacity C_cq(Γ*) according to Theorem 51. Inequality (19) tells us that the capacity C_q(Γ*) cannot exceed this value. On the other hand, I(τ, Γ*) = log 2 − η((1 + λ_1)/2) − η((1 − λ_1)/2), and we have C_cq(Γ*) = C_q(Γ*).
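The divergence radius of Example 52 can be checked numerically (a sketch; the value λ_1 = 0.6 is an illustrative assumption):

```python
import numpy as np

def eta(t):
    """eta(t) = -t log t, with eta(0) = 0."""
    return 0.0 if t <= 0 else float(-t * np.log(t))

def S_rel(a, b, eps=1e-12):
    """Umegaki relative entropy tr a (log a - log b)."""
    def mlog(m):
        w, v = np.linalg.eigh(m)
        return v @ np.diag(np.log(np.clip(w, eps, None))) @ v.conj().T
    return float(np.trace(a @ (mlog(a) - mlog(b))).real)

sigma1 = np.array([[0.0, 1.0], [1.0, 0.0]])   # Pauli sigma_1
tau = 0.5 * np.eye(2)                          # tracial state
lam1 = 0.6                                     # illustrative largest eigenvalue of A

extreme = 0.5 * (np.eye(2) + lam1 * sigma1)    # the state (I + lam1*sigma1)/2
radius = S_rel(extreme, tau)
closed_form = np.log(2) - eta((1 + lam1) / 2) - eta((1 - lam1) / 2)
print(radius, closed_form)
```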
The relations among C_cq, C_q and C_cqc form an important problem worthy of study. For a noiseless channel C_cqc = log n was obtained in [78], where n is the dimension of the output Hilbert space (actually identical to the input one). Since the tracial state is the exact divergence center of the set of all density matrices, we have C_cq = log n and also C_q = log n.
We expect that C_cq < C_cqc for "truly quantum mechanical channels", but C̃_cqc = C̃_cq = C̃_q must hold for a large class of memoryless channels.
One can obtain the following results for the attenuation channel which is discussed in the previous chapter.
Lemma 53 
Let Λ * be the attenuation channel. Then
sup I ( ( p i ) , ( φ i ) ; Λ * ) = log n
where the supremum is over all pseudo-quantum codes ((p_i)_{i=1}^n, (φ_{f(i)})_{i=1}^n) applying n coherent states.
The next theorem follows directly from the previous lemma.
Theorem 54 
The capacity C cq of the attenuation channel is infinite.
Since the argument of the proof of the above lemma works for any quasi-free channel, we can conclude C_cq = ∞ also in that more general case. Another remark concerns the classical capacity C_cqc. Since the states φ_{f(n)} used in the proof of the lemma commute in the limit, the classical capacity C_cqc is infinite as well. C_cqc = ∞ follows also from the proof of the next theorem.
Theorem 55 
The capacity C q of the attenuation channel is infinite.
Let us make some comments on the previous results. The theorems mean that an arbitrarily large amount of information can go through the attenuation channel; however, they do not say anything about the price for it. The expectation value of the number of particles needed in the pseudo-quantum code of Lemma 53 tends to infinity. Indeed,
(1/n) Σ_{i=1}^n φ_{f(i)}(N) = (1/n) Σ_{i=1}^n |f(i)|² = λ(n + 1)(2n + 1)|f|²/6
which increases rapidly with n. (Above, N denotes the number operator.) Hence the good question is to ask for the capacity of the attenuation channel when an energy constraint is imposed:
C(Λ*, E_0) = sup{ I((p_i), (φ_i); Λ*) ; Σ_i p_i φ_i(N) ≤ E_0 }
To be more precise, we have posed a bound on the average energy; a different constraint is also possible. Since
Λ ( N ) = η N
for the dual operator Λ of the channel Λ * and the number operator N, we have
C(Λ*, E_0) = sup{ Σ_i p_i S(φ_i, Σ_j p_j φ_j) ; Σ_i p_i φ_i(N) ≤ η E_0 }
The supremum in this problem is the same as
sup{ S(ψ) : ψ(N) = η E_0 }
and the well-known maximizer of this problem is a so-called Gibbs state. Therefore, we have
C(Λ*, E_0) = (η E_0 + 1) log(η E_0 + 1) − η E_0 log(η E_0)
This value can be realized as a classical capacity if the number states can be output states of the attenuation channel.
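The entropy of the maximizing Gibbs state can be evaluated numerically (a sketch; the mean particle number n̄ = ηE_0 = 2.5 is an illustrative assumption; for the number operator, the Gibbs state has a geometric number distribution):

```python
import numpy as np

def thermal_entropy_numeric(nbar, cutoff=2000):
    """Entropy of the Gibbs (thermal) state of the number operator with mean
    particle number nbar: geometric distribution p_n = nbar^n / (nbar+1)^(n+1)."""
    n = np.arange(cutoff)
    p = (1.0 / (nbar + 1.0)) * (nbar / (nbar + 1.0)) ** n
    p = p[p > 1e-300]                       # drop underflowed tail terms
    return float(-np.sum(p * np.log(p)))

nbar = 2.5  # illustrative mean particle number eta * E0
closed_form = (nbar + 1) * np.log(nbar + 1) - nbar * np.log(nbar)
numeric = thermal_entropy_numeric(nbar)
print(numeric, closed_form)
```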

13.3. Numerical computation of quantum capacity

Let us consider the quantum capacity of the attenuation channel for sets of density operators consisting of two pure states, under an energy constraint [31,79].
Let S_1 and S_2 be two subsets of S(H_1) given by
S_1 = { ρ_1 = λ|θ⟩⟨θ| + (1 − λ)|−θ⟩⟨−θ| ; λ ∈ [0, 1], θ ∈ C }
S_2 = { ρ_2 = λ|0⟩⟨0| + (1 − λ)|θ⟩⟨θ| ; λ ∈ [0, 1], θ ∈ C }
The quantum capacities of the attenuation channel Λ* with respect to the above two subsets are computed under an energy constraint |θ|² ≤ E_0 for any E_0 ≥ 0:
C_q^{S_k}(Λ*, E_0) = sup{ I(ρ_k; Λ*) ; ρ_k ∈ S_k, |θ|² ≤ E_0 }
Since the Schatten decomposition is unique for the above two state subsets, by using the notations (η(t) ≡ −t log t, t ∈ [0, 1], j = 0, 1, k = 1, 2)
S(Λ*ρ_k) = Σ_{i=0}^{1} η( ½[1 + (−1)^i √(1 − 4λ(1−λ)(1 − exp(−4^{2−k}|θ|²)))] )
κ_k = ½[1 + √(1 − 4λ(1−λ)(1 − exp(−4^{2−k}|θ|²)))],  k = 1, 2
S(Λ*E_j^{(k)}) = Σ_{i=0}^{1} η( ½[1 + (−1)^i √(1 − 4μ_j^{(k)}(1 − μ_j^{(k)})(1 − |ξ_j^{(k)}|²))] )
μ_j^{(k)} = ½[1 + ((exp(−2^{3−2k}|θ|²) − exp(−2^{3−2k}η|θ|²))τ_j^{(k)2} + 2 exp(−2^{3−2k}η|θ|²)τ_j^{(k)} + 1) / (τ_j^{(k)2} + exp(−2^{3−2k}|θ|²)τ_j^{(k)} + 1)]
ξ_j^{(k)} = (τ_j^{(k)2} − 1) / √((τ_j^{(k)2} + 1)² − 4 exp(−4^{2−k}|θ|²) τ_j^{(k)2})
τ_j^{(k)} = [(1 − 2λ) + (−1)^j √(1 − 4λ(1−λ)(1 − exp(−4^{2−k}|θ|²)))] / (2(1 − λ) exp(−4^{2−k}|θ|²))
we obtain the following result [31].
Theorem 56 
(1) For ρ_1 = λ|θ⟩⟨θ| + (1 − λ)|−θ⟩⟨−θ| ∈ S_1, the quantum mutual entropy I(ρ_1; Λ*) is calculated rigorously as
I(ρ_1; Λ*) = S(Λ*ρ_1) − κ_1 S(Λ*E_0^{(1)}) − (1 − κ_1) S(Λ*E_1^{(1)})
(2) For ρ_2 = λ|0⟩⟨0| + (1 − λ)|θ⟩⟨θ| ∈ S_2, the quantum mutual entropy I(ρ_2; Λ*) is computed precisely as
I(ρ_2; Λ*) = S(Λ*ρ_2) − κ_2 S(Λ*E_0^{(2)}) − (1 − κ_2) S(Λ*E_1^{(2)})
(3) For any E_0 ≥ 0, we have the inequality of the two quantum capacities
C_q^{S_2}(Λ*, E_0) ≤ C_q^{S_1}(Λ*, E_0)
Note that S_1 and S_2 represent the state subspaces generated by the PSK (Phase-Shift-Keying) and OOK (On-Off-Keying) modulations, respectively [75].

14. Quantum Dynamical Entropy

Classical dynamical entropy is an important tool to analyse the efficiency of information transmission in communication processes. Quantum dynamical entropy was first studied by Connes, Størmer [48] and Emch [49]. Since then, there have been many attempts to formulate or compute the dynamical entropy for some models [52,53,54,55,56,80]. Here we review four formulations due to (a) Connes, Narnhofer and Thirring (CNT) [50], (b) Muraki and Ohya (Complexity) [27,81], (c) Accardi, Ohya and Watanabe [58], (d) Alicki and Fannes (AF) [51]. We consider mutual relations among these formulations [80].
A dynamical entropy for not only a shift but also a general completely positive (CP) map was defined by Kossakowski, Ohya and Watanabe [59] by generalizing the entropy defined through a quantum Markov chain and the AF entropy defined by a finite operational partition.

14.1. Formulation by CNT

Let A be a unital C * -algebra, θ be an automorphism of A , and φ be a stationary state over A with respect to θ; φ θ = φ . Let B be a finite dimensional C * -subalgebra of A .
The CNT entropy [50] for a subalgebra B is given by
H_φ(B) = sup{ Σ_k λ_k S(ω_k|_B, φ|_B) ; φ = Σ_k λ_k ω_k is a finite decomposition of φ }
where φ | B is the restriction of the state φ to B and S ( · , · ) is the relative entropy for C * -algebra [7,8,10].
The CNT dynamical entropy with respect to θ and B is given by
H̃_φ(θ, B) = lim sup_{N→∞} (1/N) H_φ( B ∨ θ(B) ∨ ⋯ ∨ θ^{N−1}(B) )
and the dynamical entropy for θ is defined by
H ˜ φ ( θ ) = sup B H ˜ φ ( θ , B )

14.2. Formulation by MO

We define three complexities as follows:
T^S(φ; Λ*) ≡ sup{ ∫_S S(Λ*ω, Λ*φ) dμ ; μ ∈ M_φ(S) },  C_T^S(φ) ≡ T^S(φ; id)
I^S(φ; Λ*) ≡ sup{ S( ∫_S ω ⊗ Λ*ω dμ, φ ⊗ Λ*φ ) ; μ ∈ M_φ(S) },  C_I^S(φ) ≡ I^S(φ; id)
J^S(φ; Λ*) ≡ sup{ ∫_S S(Λ*ω, Λ*φ) dμ_f ; μ_f ∈ F_φ(S) },  C_J^S(φ) ≡ J^S(φ; id)
Based on the above complexities, we explain the quantum dynamical complexity (QDC) [14].
Let θ (resp. θ̄) be a stationary automorphism of A (resp. Ā); φ∘θ = φ, and let Λ (the dual map of the channel Λ*) be a covariant CP map (i.e., Λ∘θ̄ = θ∘Λ) from Ā to A. Let B_k (resp. B̄_k) be a finite subalgebra of A (resp. Ā). Moreover, let α_k (resp. ᾱ_k) be a CP unital map from B_k (resp. B̄_k) to A (resp. Ā), and let α^M and ᾱ_Λ^N be given by
α^M = (α_1, α_2, …, α_M),  ᾱ_Λ^N = (Λ∘ᾱ_1, Λ∘ᾱ_2, …, Λ∘ᾱ_N)
Two compound states for α^M and ᾱ_Λ^N, with respect to μ ∈ M_φ(S), are defined as
Φ_μ^S(α^M) = ∫_S ⊗_{m=1}^M α_m* ω dμ
Φ_μ^S(α^M ∪ ᾱ_Λ^N) = ∫_S ( ⊗_{m=1}^M α_m* ω ) ⊗ ( ⊗_{n=1}^N ᾱ_n* Λ* ω ) dμ
Using the above compound states, the three transmitted complexities [81] are defined by
T_φ^S(α^M, ᾱ_Λ^N) ≡ sup{ ∫_S S( (⊗_{m=1}^M α_m* ω) ⊗ (⊗_{n=1}^N ᾱ_n* Λ* ω), Φ_μ^S(α^M) ⊗ Φ_μ^S(ᾱ_Λ^N) ) dμ ; μ ∈ M_φ(S) }
I_φ^S(α^M, ᾱ_Λ^N) ≡ sup{ S( Φ_μ^S(α^M ∪ ᾱ_Λ^N), Φ_μ^S(α^M) ⊗ Φ_μ^S(ᾱ_Λ^N) ) ; μ ∈ M_φ(S) }
J_φ^S(α^M, ᾱ_Λ^N) ≡ sup{ ∫_S S( (⊗_{m=1}^M α_m* ω) ⊗ (⊗_{n=1}^N ᾱ_n* Λ* ω), Φ_μ^S(α^M) ⊗ Φ_μ^S(ᾱ_Λ^N) ) dμ_f ; μ_f ∈ F_φ(S) }
When B k = B ¯ k = B , A = A ¯ , θ = θ ¯ , α k = θ k 1 α = α ¯ k , where α is a unital CP map from A 0 to A , the mean transmitted complexities are
T̃_φ^S(θ, α, Λ*) ≡ lim sup_{N→∞} (1/N) T_φ^S(α^N, ᾱ_Λ^N),  T̃_φ^S(θ, Λ*) ≡ sup_α T̃_φ^S(θ, α, Λ*)
and the same for I ˜ φ S and J ˜ φ S . These quantities have properties similar to those of the CNT entropy [27,81].

14.3. Formulation by AOW

A construction of dynamical entropy is due to the quantum Markov chain [58].
Let A be a von Neumann algebra acting on a Hilbert space H and let φ be a state on A and A 0 = M d ( d × d matrix algebra). Take the transition expectation E γ : A 0 A A of Accardi [36,82] such that
E γ ( A ˜ ) = i γ i A i i γ i
where Ã = Σ_{i,j} e_{ij} ⊗ A_{ij} ∈ A_0 ⊗ A and γ = {γ_j} is a finite partition of unity I ∈ A. The quantum Markov chain is defined by ψ ≡ {φ, E_{γ,θ}} ∈ S(⊗_1^∞ A_0) such that
ψ( j_1(A_1) ⊗ ⋯ ⊗ j_n(A_n) ) ≡ φ( E_{γ,θ}(A_1 ⊗ E_{γ,θ}(A_2 ⊗ ⋯ A_{n−1} ⊗ E_{γ,θ}(A_n ⊗ I) ⋯ )) )
where E_{γ,θ} = θ∘E_γ, θ ∈ Aut(A), and j_k is the embedding of A_0 into ⊗_1^∞ A_0 such that j_k(A) = I ⊗ ⋯ ⊗ I ⊗ A (k-th position) ⊗ I ⊗ ⋯.
Suppose that for φ there exists a unique density operator ρ such that φ ( A ) = tr ρ A for any A A . Let us define a state ψ n on 1 n A 0 expressed as
ψ n ( A 1 A n ) = ψ ( j 1 ( A 1 ) j n ( A n ) )
The density operator ξ n for ψ n is given by
ξ_n ≡ Σ_{i_1 ⋯ i_n} tr_A( θ^n(γ_{i_n}) ⋯ γ_{i_1} ρ γ_{i_1} ⋯ θ^n(γ_{i_n}) ) e_{i_1 i_1} ⊗ ⋯ ⊗ e_{i_n i_n}
Put
P_{i_n ⋯ i_1} = tr_A( θ^n(γ_{i_n}) ⋯ γ_{i_1} ρ γ_{i_1} ⋯ θ^n(γ_{i_n}) )
The dynamical entropy through QMC is defined by
S̃_φ(θ; γ) ≡ lim sup_{n→∞} (1/n)( −tr ξ_n log ξ_n ) = lim sup_{n→∞} ( −(1/n) Σ_{i_1, …, i_n} P_{i_n ⋯ i_1} log P_{i_n ⋯ i_1} )
If P_{i_n ⋯ i_1} satisfies the Markov property, then the above equality is written as
S̃_φ(θ; γ) = −Σ_{i_1, i_2} P(i_2 | i_1) P(i_1) log P(i_2 | i_1)
The dynamical entropy through QMC with respect to θ and a von Neumann subalgebra B of A is given by
S̃_φ(θ; B) ≡ sup{ S̃_φ(θ; γ) ; γ ⊂ B }
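In the Markov case, this formula is the familiar entropy rate, which can be checked against block-entropy increments (a sketch; the 2-symbol transition matrix is an assumed example):

```python
import numpy as np

# Hypothetical transition matrix P[i, j] = P(j | i) and its stationary
# distribution pi (left eigenvector of P for eigenvalue 1).
P = np.array([[0.9, 0.1],
              [0.4, 0.6]])
w, v = np.linalg.eig(P.T)
pi = np.real(v[:, np.argmin(np.abs(w - 1.0))])
pi = pi / pi.sum()

# Dynamical entropy: -sum_{i1,i2} P(i2|i1) P(i1) log P(i2|i1).
rate = -sum(pi[i] * P[i, j] * np.log(P[i, j])
            for i in range(2) for j in range(2) if P[i, j] > 0)

# The same value arises as the block-entropy increment S_2 - S_1.
def block_probs(n):
    """Probabilities of all length-n symbol sequences of the stationary chain."""
    probs = []
    def rec(i, p, depth):
        if depth == n:
            probs.append(p)
            return
        for j in range(2):
            rec(j, p * P[i, j], depth + 1)
    for i in range(2):
        rec(i, pi[i], 1)
    return np.array(probs)

S1 = -np.sum(block_probs(1) * np.log(block_probs(1)))
S2 = -np.sum(block_probs(2) * np.log(block_probs(2)))
print(rate, S2 - S1)
```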

14.4. Formulation by AF

Let A be a C * -algebra, θ be an automorphism on A and φ be a stationary state with respect to θ and B be a unital * -subalgebra of A . A set γ = { γ 1 , γ 2 , , γ k } of elements of B is called a finite operational partition of unity of size k if γ satisfies the following condition:
i = 1 k γ i * γ i = I
The operation ∘ is defined by
γ ∘ ξ ≡ { γ_i ξ_j ; i = 1, 2, …, k, j = 1, 2, …, l }
for any partitions γ = { γ 1 , γ 2 , , γ k } and ξ = { ξ 1 , ξ 2 , , ξ l } . For any partition γ of size k, a k × k density matrix ρ [ γ ] = ( ρ [ γ ] i , j ) is given by
ρ [ γ ] i , j = φ ( γ j * γ i )
Then the dynamical entropy H̃_φ(θ, B, γ) with respect to the partition γ and the shift θ is defined through the von Neumann entropy S(·):
H ˜ φ ( θ , B , γ ) = lim sup n 1 n S ( ρ [ θ n 1 ( γ ) θ ( γ ) γ ] )
The dynamical entropy H ˜ φ ( θ , B ) is given by taking the supremum over operational partition of unity in B as
H̃_φ(θ, B) = sup{ H̃_φ(θ, B, γ) ; γ ⊂ B }
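The matrix ρ[γ] is easy to realize numerically (a sketch; the state ρ and the size-2 operational partition below are illustrative assumptions): it is positive with unit trace, so its von Neumann entropy is well defined.

```python
import numpy as np

# State phi = tr(rho .) on the 2x2 matrices, with an assumed density matrix rho.
rho = np.array([[0.7, 0.2], [0.2, 0.3]])

# Finite operational partition of unity of size 2: gamma_i = sqrt(B_i) with
# B_1 + B_2 = I (two commuting positive operators, so sqrt is entrywise on diag).
B1 = np.diag([0.8, 0.4]); B2 = np.eye(2) - B1
gammas = [np.sqrt(B1), np.sqrt(B2)]
assert np.allclose(sum(g.conj().T @ g for g in gammas), np.eye(2))

# rho[gamma]_{i,j} = phi(gamma_j^* gamma_i)
k = len(gammas)
rho_gamma = np.array([[np.trace(rho @ gammas[j].conj().T @ gammas[i]).real
                       for j in range(k)] for i in range(k)])

w = np.linalg.eigvalsh(rho_gamma)
entropy = float(-np.sum(w[w > 1e-12] * np.log(w[w > 1e-12])))
print(np.trace(rho_gamma), entropy)
```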

14.5. Relation between CNT and MO

In this section we discuss relations among the above four formulations. The S-mixing entropy in GQS (general quantum systems) introduced in [16] is
S^S(φ) = inf{ H(μ) ; μ ∈ M_φ(S) }
where H μ is given by
H(μ) = sup{ −Σ_{A_k ∈ Ã} μ(A_k) log μ(A_k) : Ã ∈ P(S) }
and P ( S ) is the set of all finite partitions of S .
The following theorem [27,81] shows the relation between the formulation by CNT and that by complexity.
Theorem 57 
Under the above settings, we have the following relations:
(1) 
0 ≤ I^S(φ; Λ*) ≤ T^S(φ; Λ*) ≤ J^S(φ; Λ*),
(2) 
C_I^S(φ) = C_T^S(φ) = C_J^S(φ) = S^S(φ) = H_φ(A),
(3) 
When A = Ā = B(H), for any density operator ρ,
0 ≤ I^S(ρ; Λ*) = T^S(ρ; Λ*) ≤ J^S(ρ; Λ*)
Since there exists a model showing that S^{I(α)}(φ) ≠ H_φ(A^α), S^S(φ) distinguishes states more sharply than H_φ(A), where A^α = {A ∈ A ; α(A) = A}.
Furthermore, we have the following results [83].
(1)
When A_0 and A are abelian C*-algebras and α_k is an embedding map, then
T^S(μ; α^M) = S_μ^{classical}( ∨_{m=1}^M Ã_m )
I^S(μ; α^M, ᾱ^N) = I_μ^{classical}( ∨_{m=1}^M Ã_m, ∨_{n=1}^N B̃_n )
are satisfied for any finite partitions Ã_m, B̃_n on the probability space (Ω = spec(A), F, μ).
(2)
When Λ is the restriction of A to a subalgebra M of A ; Λ = | M ,
H φ ( M ) = J S ( φ ; | M ) = J φ S ( i d ; | M )
Moreover, when
N ⊂ A_0,  A = ⊗_1^∞ A_0,  θ ∈ Aut(A),  α^N ≡ (α, θ∘α, …, θ^{N−1}∘α),  α = ᾱ : A_0 → A an embedding,  N_N ≡ ⊗_1^N N
we have
H̃_φ(θ; N) = J̃_φ^S(θ; N) = lim sup_{N→∞} (1/N) J_φ^S(α^N; |_{N_N})
We show the relation between the formulation by complexity and that by QMC. Under the same settings as in Section 14.3, we define a map E*_{(n,γ)} from S(H), the set of all density operators on H, to S((⊗_1^n C^d) ⊗ H) by
E*_{(n,γ)}(ρ) = Σ_{i_1 ⋯ i_n} e_{i_1 i_1} ⊗ ⋯ ⊗ e_{i_n i_n} ⊗ θ^{n−1}(γ_{i_n}) θ^{n−2}(γ_{i_{n−1}}) ⋯ γ_{i_1} ρ γ_{i_1} ⋯ θ^{n−2}(γ_{i_{n−1}}) θ^{n−1}(γ_{i_n})
for any density operator ρ S ( H ) . Let us take a map E ( n ) * from S ( ( 1 n C d ) H ) to S ( 1 n C d ) such that
E ( n ) * ( σ ) = tr H σ , σ S ( ( 1 n C d ) H )
Then a map Γ*_{(n,γ)} from S(H) to S(⊗_1^n C^d) is given by
Γ*_{(n,γ)}(ρ) ≡ E*_{(n)} ∘ E*_{(n,γ)}(ρ),  ρ ∈ S(H)
so that Γ*_{(n,γ)}(ρ) = ξ_n and
S ˜ φ ( θ ; γ ) = lim sup n 1 n S ( Γ ( n , γ ) * ( ρ ) )
From the above Theorem, we have C I S ( Γ ( n , γ ) * ( ρ ) ) = S ( Γ ( n , γ ) * ( ρ ) ) . Hence
S ˜ φ ( θ ; γ ) = C ˜ I S ( Γ ( γ ) * ( ρ ) ) ( lim sup n 1 n C I S ( Γ ( n , γ ) * ( ρ ) ) )

14.6. Formulation by KOW

Let B K (resp. B H ) be the set of all bounded linear operators on separable Hilbert space K (resp. H ) . We denote the set of all density operators on K (resp. H ) by S K (resp. S H ) . Let
Γ : B K B H B K B H
be a normal, unital CP linear map; that is, Γ satisfies
B̃_α ↑ B̃ ⟹ Γ(B̃_α) ↑ Γ(B̃)
Γ(I_K ⊗ I_H) = I_K ⊗ I_H  (I_H resp. I_K is the identity in H resp. K)
for any increasing net B̃_α ⊂ B(K) ⊗ B(H) converging to B̃ ∈ B(K) ⊗ B(H), and
Σ_{i,j=1}^n B̃_j* Γ(Ã_j* Ã_i) B̃_i ≥ 0
holds for any n ∈ N and any Ã_i, B̃_j ∈ B(K) ⊗ B(H).
hold for any n N and any A ˜ i , B ˜ j B K B H . For a normal state ω on B K , there exists a density operator ω ˜ S K associated to ω (i.e., ω A = t r ω ˜ A , A B K ). Then a map
E Γ , ω : B K B H B H
defined as
E Γ , ω A ˜ = ω Γ A ˜ = t r K ω ˜ Γ A ˜ , A ˜ B K B H
is a transition expectation in the sense of [84] (i.e., E Γ , ω is a linear unital CP map from B K B H to B H ), whose dual is a map
E * Γ , ω ρ : S H S K H
given by
E * Γ , ω ρ = Γ * ω ˜ ρ
The dual map E * Γ , ω is a lifting in the sense of [84]; that is, it is a continuous map from S H to S K H .
For a normal, unital CP map Λ : B H B H , i d Λ : B K B H B K B H is a normal, unital CP map, where i d is the identity map on B K . Then one defines the transition expectation
E Λ Γ , ω A ˜ = ω i d Λ Γ A ˜ , A ˜ B K B H
and the lifting
E Λ * Γ , ω ρ = Γ * ω ˜ Λ * ρ , ρ S H
The above Λ* has been called a quantum channel [13] from S(H) to S(H), in which ρ is regarded as an input signal state and ω̃ as a noise state.
The equality
tr_{⊗_1^n K ⊗ H}( Φ*^{Γ,ω}_{Λ,n}(ρ) (A_1 ⊗ ⋯ ⊗ A_n ⊗ B) ) ≡ tr_H( ρ E_Λ^{Γ,ω}(A_1 ⊗ E_Λ^{Γ,ω}(A_2 ⊗ ⋯ A_{n−1} ⊗ E_Λ^{Γ,ω}(A_n ⊗ B) ⋯ )) )
for all A 1 , A 2 , , A n B K , B B H and any ρ S H defines
(1) 
a lifting
Φ Λ , n * Γ , ω : S H S 1 n K H
and
(2) 
marginal states
ρ Λ , n Γ , ω t r H Φ Λ , n * Γ , ω ρ S 1 n K
ρ ¯ Λ , n Γ , ω t r 1 n K Φ Λ , n * Γ , ω ρ S H
Here, the state
Φ Λ , n * Γ , ω ρ S 1 n K H
is a compound state for ρ ¯ Λ , n Γ , ω and ρ Λ , n Γ , ω in the sense of [13]. Note that generally ρ ¯ Λ , n Γ , ω is not equal to ρ .
Definition 58 
The quantum dynamical entropy with respect to Λ , ρ , Γ and ω is defined by
S ˜ Λ ; ρ , Γ , ω lim sup n 1 n S ρ Λ , n Γ , ω
where S(·) is the von Neumann entropy [6]; that is, S(σ) ≡ −tr σ log σ, σ ∈ S(⊗_1^n K). The dynamical entropy with respect to Λ and ρ is defined as
S ˜ Λ ; ρ sup S ˜ Λ ; ρ , Γ , ω ; Γ , ω

14.7. Generalized AF entropy and generalized AOW entropy

In this section, we generalize both the AOW entropy and the AF entropy. Then we compare the generalized AF entropy with the generalized AOW entropy.
Let θ be an automorphism of B H , ρ be a density operator on H and E θ u be the transition expectation on B K B H with Λ = θ .
One introduces a transition expectation E_θ^u from B(K) ⊗ B(H) to B(H) such that
E_θ^u( Σ_{i,j} E_{ij} ⊗ A_{ij} ) ≡ Σ_{k,m,p,q} θ( u_{pq}^{k*} A_{km} u_{pq}^{m} ) = Σ_{k,m,p,q} θ(u_{pq}^{k})* θ(A_{km}) θ(u_{pq}^{m})
The quantum Markov state ρ θ , n u on 1 n B K is defined through this transition expectation E θ u by
tr_{⊗_1^n K}( ρ_{θ,n}^u (A_1 ⊗ ⋯ ⊗ A_n) ) ≡ tr_H( ρ E_θ^u(A_1 ⊗ E_θ^u(A_2 ⊗ ⋯ A_{n−1} ⊗ E_θ^u(A_n ⊗ I) ⋯ )) )
for all A 1 , , A n B K and any ρ S H .
Let us consider another transition expectation e^u_{θ^m} such that
e^u_{θ^m}( Σ_{i,j} E_{ij} ⊗ A_{ij} ) ≡ Σ_{k,l,p,q} θ^m(u_{pq}^{k})* A_{kl} θ^m(u_{pq}^{l})
One can define the quantum Markov state ρ̃_{θ,n}^u in terms of e^u_{θ^m} by
tr_{⊗_1^n K}( ρ̃_{θ,n}^u (A_1 ⊗ ⋯ ⊗ A_n) ) ≡ tr_H( ρ e^u_θ(A_1 ⊗ e^u_{θ²}(A_2 ⊗ ⋯ A_{n−1} ⊗ e^u_{θ^n}(A_n ⊗ I) ⋯ )) )
for all A 1 , , A n B K and any ρ S H . Then we have the following theorem.
Theorem 59 
ρ θ , n u = ρ ˜ θ , n u
Let B_0 be a subalgebra of B(K). Taking the restriction of a transition expectation E: B(K) ⊗ B(H) → B(H) to B_0 ⊗ B(H), i.e., E^0 ≡ E|_{B_0 ⊗ B(H)}, E^0 is a transition expectation from B_0 ⊗ B(H) to B(H). The QMC (quantum Markov chain) defines the state ρ_{θ,n}^{u,0} on ⊗_1^n B_0 through Equation (40), which is
ρ θ , n u 0 = ρ θ , n u 1 n B 0
The subalgebra B_0 of B(K) can be constructed as follows: let P_1, …, P_m be projection operators onto mutually orthogonal subspaces of K such that Σ_{i=1}^m P_i = I_K. Putting K_i = P_i K, the subalgebra B_0 is generated by
i = 1 m P i A P i , A B K
One observes that in the case of n = 1
ρ θ , 1 u 0 ρ θ , 1 u B 0 = i = 1 m P i ρ θ , 1 u P i
and one has for any n N
ρ θ , n u 0 = i 1 , , i n P i 1 P i n ρ θ , n u P i 1 P i n
from which the following theorem is proved (cf. [17]).
Theorem 60 
S( ρ_{θ,n}^u ) ≤ S( ρ_{θ,n}^{u,0} )
Taking into account the construction of the subalgebra B_0 of B(K), one can construct a transition expectation in the case that B(K) is a finite subalgebra of B(H).
Let B(K) be the d × d matrix algebra M_d (d ≤ dim H) in B(H) and E_{ij} = |e_i⟩⟨e_j| with normalized vectors e_i ∈ H (i = 1, 2, …, d). Let γ_1, …, γ_d ∈ B(H) be a finite operational partition of unity, that is, Σ_{i=1}^d γ_i* γ_i = I; then a transition expectation
E γ : M d B H B H
is defined by
E_γ( Σ_{i,j=1}^d E_{ij} ⊗ A_{ij} ) ≡ Σ_{i,j=1}^d γ_i* A_{ij} γ_j
Remark that the above type of completely positive map E_γ is also discussed in [85].
Let M_d^0 be the subalgebra of M_d consisting of the diagonal elements of M_d. Since an element of M_d^0 has the form Σ_{i=1}^d b_i E_{ii} (b_i ∈ C), one can see that the restriction E_γ^0 of E_γ to M_d^0 ⊗ B(H) is given by
E_γ^0( Σ_{i,j=1}^d E_{ij} ⊗ A_{ij} ) ≡ Σ_{i=1}^d γ_i* A_{ii} γ_i
When Λ : B H B H is a normal unital CP map, the transition expectations E Λ γ and E Λ γ 0 are defined by
E_Λ^γ( Σ_{i,j=1}^d E_{ij} ⊗ A_{ij} ) ≡ Σ_{i,j=1}^d Λ( γ_i* A_{ij} γ_j )
E_Λ^{γ0}( Σ_{i,j=1}^d E_{ij} ⊗ A_{ij} ) ≡ Σ_{i=1}^d Λ( γ_i* A_{ii} γ_i )
Then one obtains the quantum Markov states ρ Λ , n γ and ρ Λ , n γ 0
ρ_{Λ,n}^γ = Σ_{i_1,…,i_n=1}^d Σ_{j_1,…,j_n=1}^d tr_H( ρ Λ∘W_{j_1 i_1}∘Λ∘W_{j_2 i_2}∘ ⋯ ∘Λ∘W_{j_n i_n}(I_H) ) E_{i_1 j_1} ⊗ ⋯ ⊗ E_{i_n j_n}
and
ρ_{Λ,n}^{γ0} = Σ_{i_1,…,i_n=1}^d tr_H( ρ Λ∘W_{i_1 i_1}∘Λ∘W_{i_2 i_2}∘ ⋯ ∘Λ∘W_{i_n i_n}(I_H) ) E_{i_1 i_1} ⊗ ⋯ ⊗ E_{i_n i_n} = Σ_{i_1,…,i_n=1}^d p_{i_1,…,i_n} E_{i_1 i_1} ⊗ ⋯ ⊗ E_{i_n i_n}
where we put
W_{ij}(A) ≡ γ_i* A γ_j,  A ∈ B(H)
W_{ij}*(ρ) = γ_j ρ γ_i*,  ρ ∈ S(H)
p_{i_1,…,i_n} ≡ tr_H( ρ Λ∘W_{i_1 i_1}∘Λ∘W_{i_2 i_2}∘ ⋯ ∘Λ∘W_{i_n i_n}(I_H) )
= tr_H( W*_{i_n i_n}∘Λ*∘ ⋯ ∘Λ*∘W*_{i_2 i_2}∘Λ*∘W*_{i_1 i_1}∘Λ*(ρ) )
The above ρ_{Λ,n}^γ and ρ_{Λ,n}^{γ0} become special cases of ρ_{Λ,n}^{Γ,ω} obtained by taking suitable Γ and ω in Equation (33). Therefore the dynamical entropy of Equation (36) becomes
S̃(Λ; ρ, {γ_i}) ≡ lim sup_{n→∞} (1/n) S( ρ_{Λ,n}^γ )
S̃^0(Λ; ρ, {γ_i}) ≡ lim sup_{n→∞} (1/n) S( ρ_{Λ,n}^{γ0} )
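The diagonal probabilities p_{i_1,…,i_n} are straightforward to evaluate numerically (a sketch; the qubit state, the PVM partition, and the rotation unitary used for Λ* are illustrative assumptions):

```python
import numpy as np
from itertools import product

# p_{i1...in} = tr( W*_{in in} Lambda* ... Lambda* W*_{i1 i1} Lambda* rho ),
# with W*_{ii}(rho) = gamma_i rho gamma_i*.
theta = 0.3
U = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
Lambda_star = lambda r: U @ r @ U.T                 # assumed unitary channel
gammas = [np.diag([1.0, 0.0]), np.diag([0.0, 1.0])]  # PVM partition of unity
rho = np.array([[0.6, 0.1], [0.1, 0.4]])

n = 3
probs = []
for seq in product(range(2), repeat=n):
    r = rho
    for i in seq:
        r = Lambda_star(r)                           # apply Lambda* first
        r = gammas[i] @ r @ gammas[i].conj().T       # then W*_{ii}
    probs.append(float(np.trace(r).real))

total = sum(probs)                                   # trace preservation => 1
block_entropy = -sum(p * np.log(p) for p in probs if p > 1e-15)
print(total, block_entropy / n)
```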
The dynamical entropies of Λ with respect to a finite dimensional subalgebra B B H and the transition expectations E Λ γ and E Λ γ 0 are given by
S̃_B(Λ; ρ) ≡ sup{ S̃(Λ; ρ, {γ_i}) ; {γ_i} ⊂ B }
S̃_B^0(Λ; ρ) ≡ sup{ S̃^0(Λ; ρ, {γ_i}) ; {γ_i} ⊂ B }
We call (56) and (57) a generalized AF entropy and a generalized dynamical entropy by QMC, respectively. When {γ_i} is a PVM (projection-valued measure) and Λ is an automorphism θ, S̃_B^0(θ; ρ) equals the AOW dynamical entropy by QMC [58]. When {γ_i* γ_i} is a POVM (positive operator-valued measure) and Λ = θ, S̃_B(θ; ρ) equals the AF entropy [51].
From Theorem 60, one obtains the following inequality.
Theorem 61 
S̃_B(Λ; ρ) ≤ S̃_B^0(Λ; ρ)
That is, the generalized dynamical entropy by QMC is greater than the generalized AF entropy. Moreover, the dynamical entropy S̃_B(Λ; ρ) is rather difficult to compute because of the off-diagonal parts in (48), whereas the dynamical entropy S̃_B^0(Λ; ρ) can be computed easily.
Here, we note that the dynamical entropy defined in terms of ρ θ , n u on 1 n B K is related to that of flows by Emch [49], which was defined in terms of the conditional expectation, provided B K is a subalgebra of B H .

15. Conclusion

As mentioned above, we reviewed the mathematical aspects of quantum entropy and discussed several applications to quantum communication and statistical physics, all of which have been studied by the present authors. Other topics in quantum information have recently been developed in various directions, such as quantum algorithms, quantum teleportation, and quantum cryptography; they are discussed in [60].

References

  1. Shannon, C.E. A mathematical theory of communication. Bell Syst. Tech. J. 1948, 27, 379–423, 623–656. [Google Scholar] [CrossRef]
  2. Kullback, S.; Leibler, R. On information and sufficiency. Ann. Math. Stat. 1951, 22, 79–86. [Google Scholar] [CrossRef]
  3. Gelfand, I.M.; Yaglom, A.M. Calculation of the amount of information about a random function contained in another such function. Amer. Math. Soc. Transl. 1959, 12, 199–246. [Google Scholar]
  4. Kolmogorov, A.N. Theory of transmission of information. Amer. Math. Soc. Transl. 1963, 33, 291–321. [Google Scholar]
  5. Ohya, M. Quantum ergodic channels in operator algebras. J. Math. Anal. Appl. 1981, 84, 318–327. [Google Scholar] [CrossRef]
  6. Von Neumann, J. Die Mathematischen Grundlagen der Quantenmechanik; Springer: Berlin, Germany, 1932. [Google Scholar]
  7. Umegaki, H. Conditional expectations in an operator algebra IV(entropy and information). Kodai Math. Sem. Rep. 1962, 14, 59–85. [Google Scholar] [CrossRef]
  8. Araki, H. Relative entropy of states of von Neumann Algebras. Publ.RIMS, Kyoto Univ. 1976, 11, 809–833. [Google Scholar] [CrossRef]
  9. Araki, H. Relative entropy for states of von Neumann algebras II. Publ.RIMS, Kyoto Univ. 1977, 13, 173–192. [Google Scholar] [CrossRef]
  10. Uhlmann, A. Relative entropy and the Wigner-Yanase-Dyson-Lieb concavity in interpolation theory. Commun. Math. Phys. 1977, 54, 21–32. [Google Scholar] [CrossRef]
  11. Donald, M.J. On the relative entropy. Commun. Math. Phys. 1985, 105, 13–34. [Google Scholar] [CrossRef]
  12. Urbanik, K. Joint probability distribution of observables in quantum mechanics. Stud. Math. 1961, 21, 317. [Google Scholar]
  13. Ohya, M. On compound state and mutual information in quantum information theory. IEEE Trans. Inf. Theory 1983, 29, 770–777. [Google Scholar] [CrossRef]
  14. Ohya, M. Note on quantum probability. Lett. Nuovo Cimento 1983, 38, 402–404. [Google Scholar] [CrossRef]
  15. Schatten, R. Norm Ideals of Completely Continuous Operators; Springer Verlag: Berlin, Germany, 1970. [Google Scholar]
  16. Ohya, M. Some aspects of quantum information theory and their applications to irreversible processes. Rep. Math. Phys. 1989, 27, 19–47. [Google Scholar] [CrossRef]
  17. Ohya, M.; Petz, D. Quantum Entropy and its Use; Springer Verlag: Berlin, Germany, 1993. [Google Scholar]
  18. Hiai, F.; Ohya, M.; Tsukada, M. Sufficiency, KMS condition and relative entropy in von Neumann algebras. Pacif. J. Math. 1981, 96, 99–109. [Google Scholar] [CrossRef]
  19. Hiai, F.; Ohya, M.; Tsukada, M. Sufficiency and relative entropy in *-algebras with applications to quantum systems. Pacif. J. Math. 1983, 107, 117–140. [Google Scholar] [CrossRef]
  20. Petz, D. Sufficient subalgebras and the relative entropy of states of a von Neumann algebra. Commun. Math. Phys. 1986, 105, 123–131. [Google Scholar] [CrossRef]
  21. Holevo, A.S. Some estimates for the amount of information transmittable by a quantum communication channel (in Russian). Prob. Pered. Infor. 1973, 9, 3–11. [Google Scholar]
  22. Ingarden, R.S. Quantum information theory. Rep. Math. Phys. 1976, 10, 43–73. [Google Scholar] [CrossRef]
  23. Ohya, M. Entropy Transmission in C*-dynamical systems. J. Math. Anal. Appl. 1984, 100, 222–235. [Google Scholar] [CrossRef]
  24. Accardi, L.; Ohya, M.; Suyari, H. Computation of mutual entropy in quantum Markov chains. Open Sys. Infor. Dyn. 1994, 2, 337–354. [Google Scholar] [CrossRef]
  25. Akashi, S. Superposition representability problems of quantum information channels. Open Sys. Infor. Dyn. 1997, 4, 45–52. [Google Scholar] [CrossRef]
  26. Muraki, N.; Ohya, M.; Petz, D. Entropies of general quantum systems. Open Sys. Infor. Dyn. 1992, 1, 43–56. [Google Scholar] [CrossRef]
  27. Muraki, N.; Ohya, M. Entropy functionals of Kolmogorov-Sinai type and their limit theorems. Lett. Math. Phys. 1996, 36, 327–335. [Google Scholar] [CrossRef]
  28. Ohya, M.; Watanabe, N. Construction and analysis of a mathematical model in quantum communication processes. Scripta Technica, Inc., Elect. Commun. Japan 1985, 68, 29–34. [Google Scholar] [CrossRef]
  29. Ohya, M. State change and entropies in quantum dynamical systems. Springer Lect. Not. in Math. 1985, 1136, 397–408. [Google Scholar]
  30. Ohya, M.; Petz, D.; Watanabe, N. On capacities of quantum channels. Prob. and Math. Stat. 1997, 17, 179–196. [Google Scholar]
  31. Ohya, M.; Petz, D.; Watanabe, N. Numerical computation of quantum capacity. Inter. J. Theor. Phys. 1998, 38, 507–510. [Google Scholar] [CrossRef]
  32. Ohya, M.; Watanabe, N. A mathematical study of information transmission in quantum communication processes. Quant. Commun. Measur. 1995, 2, 371–378. [Google Scholar]
  33. Ohya, M.; Watanabe, N. A new treatment of communication processes with Gaussian Channels. Japan J. Appl. Math. 1986, 3, 197–206. [Google Scholar] [CrossRef]
  34. Fichtner, K.-H.; Freudenberg, W. Point processes and the position, distribution of infinite boson systems. J. Stat. Phys. 1987, 47, 959. [Google Scholar] [CrossRef]
  35. Fichtner, K.-H.; Freudenberg, W. Characterization of states of infinite Boson systems I. On the construction of states. Commun. Math. Phys. 1991, 137, 315–357. [Google Scholar] [CrossRef]
  36. Accardi, L.; Frigerio, A.; Lewis, J. Quantum stochastic processes. Publ. Res. Inst. Math. Sci. 1982, 18, 97–133. [Google Scholar] [CrossRef]
  37. Accardi, L.; Frigerio, A. Markov cocycles. Proc. R. Ir. Acad. 1983, 83A, 251–269. [Google Scholar]
  38. Milburn, G.J. Quantum optical Fredkin gate. Phys. Rev. Lett. 1989, 63, 2124–2127. [Google Scholar] [CrossRef] [PubMed]
  39. Yuen, H.P.; Ozawa, M. Ultimate information carrying limit of quantum systems. Phys. Rev. Lett. 1993, 70, 363–366. [Google Scholar] [CrossRef] [PubMed]
  40. Ohya, M. Fundamentals of quantum mutual entropy and capacity. Open Sys. Infor. Dyn. 1999, 6, 69–78. [Google Scholar] [CrossRef]
  41. Ohya, M.; Volovich, I.V. On quantum entropy and its bound. Infi. Dimen. Anal., Quant. Prob. and Rela. Topics 2003, 6, 301–310. [Google Scholar] [CrossRef]
  42. Holevo, A.S. The capacity of quantum channel with general signal states. IEEE Trans. Inf. Theory 1998, 44, 269–273. [Google Scholar] [CrossRef]
  43. Schumacher, B.W. Sending entanglement through noisy quantum channels. Phys. Rev. A 1996, 54, 2614. [Google Scholar] [CrossRef] [PubMed]
  44. Belavkin, V.P.; Ohya, M. Quantum entropy and information in discrete entangled states. Infi. Dimen. Anal., Quant. Prob. Rela. Topics 2001, 4, 137–160. [Google Scholar] [CrossRef]
  45. Belavkin, V.P.; Ohya, M. Quantum entanglements and entangled mutual entropy. Proc. Roy. Soc. Lond. A. 2002, 458, 209–231. [Google Scholar] [CrossRef]
  46. Ingarden, R.S.; Kossakowski, A.; Ohya, M. Information Dynamics and Open Systems; Kluwer: Dordrecht, the Netherlands, 1997. [Google Scholar]
  47. Kolmogorov, A.N. Dokl. Akad. Nauk SSSR 1958, 119, 861; 1959, 124, 754.
  48. Connes, A.; Størmer, E. Entropy for automorphisms of II₁ von Neumann algebras. Acta Math. 1975, 134, 289–306. [Google Scholar] [CrossRef]
  49. Emch, G.G. Positivity of the K-entropy on non-abelian K-flows. Z. Wahrscheinlichkeitstheory verw. Gebiete 1974, 29, 241–252. [Google Scholar] [CrossRef]
  50. Connes, A.; Narnhofer, H.; Thirring, W. Dynamical entropy of C*-algebras and von Neumann algebras. Commun. Math. Phys. 1987, 112, 691–719. [Google Scholar] [CrossRef]
  51. Alicki, R.; Fannes, M. Defining quantum dynamical entropy. Lett. Math. Phys. 1994, 32, 75–82. [Google Scholar] [CrossRef]
  52. Benatti, F. Deterministic Chaos in Infinite Quantum Systems, Series: Trieste Notes in Physics; Springer-Verlag: Berlin, Germany, 1993. [Google Scholar]
  53. Park, Y.M. Dynamical entropy of generalized quantum Markov chains. Lett. Math. Phys. 1994, 32, 63–74. [Google Scholar] [CrossRef]
  54. Hudetz, T. Topological entropy for appropriately approximated C*-algebras. J. Math. Phys. 1994, 35, 4303–4333. [Google Scholar] [CrossRef]
  55. Voiculescu, D. Dynamical approximation entropies and topological entropy in operator algebras. Commun. Math. Phys. 1995, 170, 249–281. [Google Scholar] [CrossRef]
  56. Choda, M. Entropy for extensions of Bernoulli shifts. Ergod. Theo. Dyn. Sys. 1996, 16, 1197–1206. [Google Scholar] [CrossRef]
  57. Ohya, M. Information dynamics and its application to optical communication processes. Springer Lect. Not. Math. 1991, 378, 81. [Google Scholar]
  58. Accardi, L.; Ohya, M.; Watanabe, N. Dynamical entropy through quantum Markov chain. Open Sys. Infor. Dyn. 1997, 4, 71–87. [Google Scholar] [CrossRef]
  59. Kossakowski, A.; Ohya, M.; Watanabe, N. Quantum dynamical entropy for completely positive maps. Infi. Dimen. Anal., Quant. Prob. Rela. Topics 1999, 2, 267–282. [Google Scholar] [CrossRef]
  60. Ohya, M.; Volovich, I.V. Mathematical Foundation of Quantum Information and Computation. (in preparation)
  61. Lindblad, G. Entropy, information and quantum measurements. Commun. Math. Phys. 1973, 33, 111–119. [Google Scholar] [CrossRef]
  62. Lindblad, G. Completely positive maps and entropy inequalities. Commun. Math. Phys. 1975, 40, 147–151. [Google Scholar] [CrossRef]
  63. Cecchini, C.; Petz, D. State extensions and a radon-Nikodym theorem for conditional expectations on von Neumann algebras. Pacif. J. Math. 1989, 138, 9–24. [Google Scholar] [CrossRef]
  64. Fichtner, K.-H.; Ohya, M. Quantum teleportation with entangled states given by beam splittings. Commun. Math. Phys. 2001, 222, 229–247. [Google Scholar] [CrossRef]
  65. Fichtner, K.-H.; Ohya, M. Quantum teleportation and beam splitting. Commun. Math. Phys. 2002, 225, 67–89. [Google Scholar] [CrossRef]
  66. Billingsley, P. Ergodic Theory and Information; Wiley: New York, NY, USA, 1965. [Google Scholar]
  67. Ohya, M.; Tsukada, M.; Umegaki, H. A formulation of noncommutative McMillan theorem. Proc. Japan Acad. 1987, 63, Ser.A, 50–53. [Google Scholar] [CrossRef]
  68. Frigerio, A. Stationary states of quantum dynamical semigroups. Commun. Math. Phys. 1978, 63, 269–276. [Google Scholar] [CrossRef]
  69. Wehrl, A. General properties of entropy. Rev. Mod. Phys. 1978, 50, 221–260. [Google Scholar] [CrossRef]
  70. Shor, P. The Quantum Channel Capacity and Coherent Information, Lecture Notes, MSRI Workshop on Quantum Computation. San Francisco, CA, USA, 21–23 October 2002, (unpublished).
  71. Barnum, H.; Nielsen, M.A.; Schumacher, B.W. Information transmission through a noisy quantum channel. Phys. Rev. A 1998, 57, 4153–4175. [Google Scholar] [CrossRef]
  72. Bennett, C.H.; Shor, P.W.; Smolin, J.A.; Thapliyal, A.V. Entanglement-assisted capacity of a quantum channel and the reverse Shannon theorem. IEEE Trans. Inf. Theory 2002, 48, 2637–2655. [Google Scholar] [CrossRef]
  73. Schumacher, B.W.; Nielsen, M.A. Quantum data processing and error correction. Phys. Rev. A 1996, 54, 2629. [Google Scholar] [CrossRef] [PubMed]
  74. Ohya, M.; Watanabe, N. Comparison of mutual entropy-type measures. TUS preprint. 2003. [Google Scholar]
  75. Watanabe, N. Efficiency of optical modulations with coherent states. Springer Lect. Note. Phys. 1991, 378, 350–360. [Google Scholar]
  76. Ohya, M. Complexities and their applications to characterization of chaos. Inter. J. Theor. Phys. 1998, 37, 495–505. [Google Scholar] [CrossRef]
  77. Ohya, M.; Petz, D. Notes on quantum entropy. Stud. Scien. Math. Hungar. 1996, 31, 423–430. [Google Scholar]
  78. Fujiwara, A.; Nagaoka, H. Capacity of memoryless quantum communication channels. Math. Eng. Tech. Rep., Univ. Tokyo 1994, 94, 22. [Google Scholar]
  79. Ohya, M.; Watanabe, N. Quantum capacity of noisy quantum channel. Quant. Commun. Measur. 1997, 3, 213–220. [Google Scholar]
  80. Accardi, L.; Ohya, M.; Watanabe, N. Note on quantum dynamical entropies. Rep. Math. Phys. 1996, 38, 457–469. [Google Scholar] [CrossRef] [Green Version]
  81. Ohya, M. State change, complexity and fractal in quantum systems. Quant. Commun. Measur. 1995, 2, 309–320. [Google Scholar]
  82. Accardi, L. Noncommutative Markov chains. Inter. Sch. Math. Phys., Camerino 1974, 268. [Google Scholar]
  83. Ohya, M.; Heyde, C.C. Foundation of entropy, complexity and fractal in quantum systems. In Probability towards 2000. Lecture Notes in Statistics; Springer-Verlag: New York, NY, USA, 1998; pp. 263–286. [Google Scholar]
  84. Accardi, L.; Ohya, M. Compound channels, transition expectations, and liftings. Appl. Math. Optim. 1999, 39, 33–59. [Google Scholar] [CrossRef]
  85. Tuyls, P. Comparing quantum dynamical entropies. Banach Cent. Pub. 1998, 43, 411–420. [Google Scholar]

Share and Cite

MDPI and ACS Style

Ohya, M.; Watanabe, N. Quantum Entropy and Its Applications to Quantum Communication and Statistical Physics. Entropy 2010, 12, 1194-1245. https://doi.org/10.3390/e12051194
