Article

Robust Secure Authentication and Data Storage with Perfect Secrecy

Institute of Theoretical Information Technology, Technical University of München, 80333 München, Germany
* Author to whom correspondence should be addressed.
Cryptography 2018, 2(2), 8; https://doi.org/10.3390/cryptography2020008
Submission received: 29 January 2018 / Revised: 23 March 2018 / Accepted: 6 April 2018 / Published: 10 April 2018
(This article belongs to the Special Issue Physical Security in a Cryptographic Environment)

Abstract
We consider an authentication process that makes use of biometric data or the output of a physical unclonable function (PUF), respectively, from an information-theoretic point of view. We analyse different definitions of achievability for the authentication model; these definitions differ in the requirements they place on the secrecy of the key generated for authentication. In the first work on PUF-based authentication, weak secrecy was used and the corresponding capacity regions were characterized. The disadvantages of weak secrecy are well known. The ultimate performance criteria for the key are perfect secrecy together with a uniform distribution of the key. We derive the corresponding capacity region. We show that, for perfect secrecy and a uniformly distributed key, we can achieve the same rates as for weak secrecy together with a weaker requirement on the distribution of the key. The classical works on PUF-based authentication assume that the source statistics are known perfectly. This requirement is rarely met in applications. That is why the model is generalized to a compound model, taking source uncertainty into account. We also derive the capacity region for the compound model requiring perfect secrecy. Additionally, we consider results for secure storage using a biometric or PUF source that follow directly from the results for authentication. We also generalize known results for this problem by weakening the assumption concerning the distribution of the data that shall be stored. This allows us to combine source compression and secure storage.

1. Introduction

The present work addresses two essential practical problems concerning secrecy in information systems. The first problem is authentication in order to manage access to a system. The second problem is secure storage in public databases. Both problems are of essential importance for the development of future communication systems. The goal of this work is to derive a fundamental characterization of the possible performance of such communication systems under very strict secrecy requirements. We show that these strict requirements can be met without loss in performance compared to known results with weaker secrecy requirements.
Information theoretic security has become a very active field of research in information theory in the past ten years, with a large number of promising approaches. For a current presentation, see [1]. In [2], the paper first introducing information theoretic security, the authors suggest requiring perfect secrecy [3] to guarantee security in communication. This means that the data available to an attacker should be stochastically independent of the message that should be kept secret (the data and the message are modeled using random variables (RVs)). Thus, an attacker does not benefit from learning these data. In [4], this notion of security is weakened: the authors use weak secrecy [3] instead of perfect secrecy to guarantee secure communication. Many of the works on information theoretic security following [4] consider weak secrecy or strong secrecy [3], yet another security requirement that is also weaker than perfect secrecy. As the name suggests, perfect secrecy is the desired ideal situation in cryptographic applications, where an attacker does not obtain any information about the secret. Considering the roots of information theoretic security and its intuitive motivation, it is natural to require perfect secrecy for secure communication. Additionally, in [3], the recommendation is not to use weak secrecy as a secrecy measure. In [5], there is an example of a protocol that is obviously not secure, but meets the weak secrecy requirement.
The authors of the landmark paper [6] derive the capacity for secret key generation requiring perfect secrecy. A different model in information theoretic security has as an essential feature a biometric source or a PUF source. The output of a biometric source uniquely characterizes a person [7], and the output of a PUF source uniquely characterizes a device [8]. This property qualifies them for use in authentication as well as in secure storage. In [7,9], the authors consider a model for authentication using the output of a biometric source. They also consider a model that can be interpreted as a model for secure storage using a biometric source. Both of these models are very similar to the model for secret key generation, and for both models the authors require weak secrecy to hold when defining achievability.
In [6,7,9], the authors assume that the statistics of the (PUF) source are perfectly known. A simple analysis of [6,7,9] shows that the protocols for authentication constructed there depend heavily on the knowledge of the source statistics. In particular, small variations of the source statistics can affect the reliability and secrecy of the protocols for authentication or storage, respectively. The assumption that the source statistics are perfectly known is too optimistic in applications. That is why we are interested in taking the uncertainty of the (PUF) source into account. We assume that we do not know the statistics of the source, but that we know a set of source statistics that contains the actual one. Thus, we consider a compound version of the source model. We want to develop robust protocols that work for all source statistics in a given set. The compound model also allows us to describe an attack scenario where the attacker is able to alter the source statistics. There are relatively few results concerning compound sources. The compound version of the source model from [6] is considered in [10].
One of our contributions in the present work is the generalization of the model for authentication from [7], by considering authentication using a compound PUF source (or equivalently a biometric source). Additionally, our work differs from the state of the art as we consider protocols for authentication that achieve perfect secrecy.
We also consider secure data storage making use of a PUF source (or equivalently a biometric source). The corresponding information theoretic model is very similar to the second model presented in [7], but, in contrast to [7], we define achievability requiring perfect secrecy and we consider source uncertainty of the PUF source. Our considerations concerning perfect secrecy in this work answer the question posed in the conclusion of [11].
Some of the results for secure authentication described in this work have already been published in [12]. Here, we additionally present the proofs that have been omitted in [12], i.e., the proofs of Theorem 4 and Theorem 5 and some more discussion. The results concerning secure storage have been presented in [13,14]. As these results heavily depend on [12], we briefly state them here (as well as the corresponding definitions).
In Section 2, we describe the authentication process and define the corresponding information theoretic model. We discuss different definitions of achievability for the model in Section 3. In this context, protocols that achieve perfect secrecy are of special interest; we develop the corresponding definition of achievability in this section. In Section 4, we prove capacity results for the model with respect to the various definitions of achievability. The main result in this section is Theorem 2. In Section 5, we generalize the model for authentication to the case with source uncertainty and define achievability for this model in Section 6. In Section 7, we derive the capacity region for the compound authentication model. In Section 8, we consider some results for secure storage that follow from our results for authentication. The key result from authentication that we use for secure storage with perfect secrecy is Theorem 2. In Section 9, we further discuss our results.
For the most part, we use the notation introduced in [3].

2. Authentication Model

At first, we consider authentication using biometric or PUF data. This means we consider a scenario where a user enrolls in a system by giving a certain amount of biometric or PUF data to the system. Later, when the user wants to be authenticated, he again gives biometric or PUF data to the system. The system then decides if the user is accepted, i.e., if it is the same user that is enrolled in the system. In our considerations, we assume that the system can store some data in a public database.
Figure 1 depicts the authentication process as described in [7]. The process consists of two phases. In the first phase, the enrollment phase, the authentication system receives $X^n$ from the PUF source and the ID of a user. It generates a helper message $M$ and a secret key $K$ from $X^n$. It then applies a one-way function $f$ to $K$ and stores the result $f(K)$ and $M$ in a public database together with the user's ID. The second phase is the authentication phase. In this phase, the system receives $Y^n$ from the PUF source and the ID of a user. It reads the corresponding helper message $M$ and $f(K)$ from the database. From $M$ and $Y^n$, it generates a secret key $\hat{K}$. Then, the system compares $f(K)$ and $f(\hat{K})$. If they are equal, the user is accepted; otherwise, the user is rejected.
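To make the two phases concrete, the following minimal Python sketch mimics enrollment and authentication. It is an illustration only, our own simplified construction rather than the typicality-based protocols analysed below: the helper message is a code-offset of a repetition code, the repetition code absorbs the read noise between $X^n$ and $Y^n$, and a hash plays the role of the one-way function $f$. All function names and parameters are hypothetical.

```python
import hashlib
import secrets

def enroll(x_bits, key_len, rep):
    """Enrollment sketch: derive a key K and helper message M from X^n, store M and f(K)."""
    assert len(x_bits) == key_len * rep
    key = [secrets.randbelow(2) for _ in range(key_len)]      # secret key K
    code = [b for b in key for _ in range(rep)]               # repetition encoding of K
    helper = [xb ^ cb for xb, cb in zip(x_bits, code)]        # helper message M = X^n XOR code(K)
    digest = hashlib.sha256(bytes(key)).hexdigest()           # f(K), the stored one-way image of K
    return helper, digest

def authenticate(y_bits, helper, digest, key_len, rep):
    """Authentication sketch: reconstruct K_hat from Y^n and M, accept iff f(K_hat) = f(K)."""
    noisy_code = [yb ^ mb for yb, mb in zip(y_bits, helper)]  # code(K) corrupted by X^n XOR Y^n
    key_hat = []
    for i in range(key_len):                                  # majority decoding, block by block
        block = noisy_code[i * rep:(i + 1) * rep]
        key_hat.append(1 if sum(block) > rep // 2 else 0)
    return hashlib.sha256(bytes(key_hat)).hexdigest() == digest

# Toy run: Y^n is X^n with a few bit flips, so the legitimate user is accepted.
x = [secrets.randbelow(2) for _ in range(128 * 5)]
y = [b ^ (1 if i % 97 == 0 else 0) for i, b in enumerate(x)]
m, d = enroll(x, key_len=128, rep=5)
print(authenticate(y, m, d, key_len=128, rep=5))              # True for this noise level
```

In the information-theoretic model defined next, the questions are exactly how large the key $K$ can be made and how little information the stored data $M$ may leak about $X^n$.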
Now, we define an information theoretic model of the authentication process. We use random variables (RVs) to model the data. In the first sections of this work, we assume that the distribution of the RVs is perfectly known. We drop this assumption in Section 5.
Definition 1.
Let $n \in \mathbb{N}$. The authentication model consists of a discrete memoryless multiple source (DMMS) with generic variables $XY$ [3], the (possibly randomized) encoders [3] $\Phi: \mathcal{X}^n \to \mathcal{M}$, $\Theta: \mathcal{X}^n \to \mathcal{K}$ and the deterministic decoder $\psi: \mathcal{Y}^n \times \mathcal{M} \to \mathcal{K}$. Let $X^n$ and $Y^n$ be the output of the DMMS. The RVs $M$ and $K$ are generated from $X^n$ using $\Phi$ and $\Theta$. The RV $\hat{K}$ is generated from $Y^n$ and $M$ using $\psi$. We use the term authentication protocol for $(\Phi, \Theta, \psi)$.
Remark 1.
It is possible to define the authentication protocol in a more general way by permitting randomized decoders Ψ, but one can argue that in our definition of achievability a randomized Ψ does not improve the performance of the protocols ([3], Problem 17.11). For convenience, we use the less general definition.
Remark 2.
We model the PUF source as a DMMS. Due to physically induced distortions, we model the biometric/PUF data read in the two phases as jointly distributed RVs.
Remark 3.
The distribution of X Y is assumed to be known and can be used for the generation of the RVs. Thus, the encoders and the decoder are allowed to depend on the distribution.

3. Various Definitions of Achievability

For the authentication model, we define achievable secret-key rate versus privacy-leakage rate pairs. Intuitively, we want the probability that a legitimate user is rejected in the authentication phase to be small. Thus, $\Pr(K = \hat{K})$ should be large to fulfill this reliability condition. Additionally, the probability that an attacker is accepted in the authentication phase should be as small as possible. Thus, we consider the maximum false acceptance probability (mFAP) [15], which is the probability that an attacker using the best possible attack strategy is accepted in the authentication phase, averaged over all public messages $m \in \mathcal{M}$. As we want the mFAP to be as small as possible, we are interested in the largest possible set of secret keys $\mathcal{K}$; this reasoning is explained below. The system uses the output of a PUF source as input, so it should leak as little information about $X^n$ as possible [7]. This motivates the following definition of achievable rate pairs.
Definition 2.
A tuple $(R, L)$, $R, L \geq 0$, is an achievable secret-key rate versus privacy-leakage rate pair for the authentication model if for every $\delta > 0$ there is an $n_0 = n_0(\delta)$ such that for all $n \geq n_0$ there exists an authentication protocol such that
\[
\Pr(K = \hat{K}) \geq 1 - \delta,
\]
\[
\mathrm{mFAP} \leq \frac{1}{|\mathcal{K}|}, \tag{1}
\]
\[
\frac{1}{n} \log |\mathcal{K}| \geq R - \delta, \qquad \frac{1}{n} I(M; X^n) \leq L + \delta.
\]
We denote the corresponding authentication protocols by FAP-Protocols (False-Acceptance-Probability-Protocols).
Remark 4.
In [15], a very similar definition of achievability is used. Instead of considering the relation between the mFAP and the set of secret keys in Inequality (1), the authors define the false-acceptance exponent, which describes the exponential decrease of the mFAP in $n$. A rate pair $(R, L)$ that is achievable using FAP-Protocols is also achievable according to the definition in [15], with $R$ playing the role of the false-acceptance exponent.
We now clarify the bound on the mFAP in Inequality (1) and our interest in large secret-key rates. For this purpose, we consider the following observation.
Lemma 1.
For a communication protocol fulfilling the reliability condition, it holds that
\[
\mathrm{mFAP} \geq \frac{1 - \delta}{|\mathcal{K}|}.
\]
Proof. 
Introduce the RV $E$, setting $E = 1$ if $K \neq \hat{K}$ and $E = 0$ otherwise. Thus,
\[
\begin{aligned}
\mathrm{mFAP} &= \sum_{m \in \mathcal{M}} P_M(m) \max_{y^n \in \mathcal{Y}^n} P_{K|M}\big(\psi(y^n, m) \,\big|\, m\big) \\
&\geq \sum_{m \in \mathcal{M}} P_M(m) \max_{y^n \in \mathcal{Y}^n} P_{K|ME}\big(\psi(y^n, m) \,\big|\, m, 0\big) P_{E|M}(0 \,|\, m) \\
&\stackrel{(a)}{=} \sum_{m \in \mathcal{M}} P_{ME}(m, 0) \max_{k \in \mathcal{K}} P_{K|ME}(k \,|\, m, 0) \\
&\geq \sum_{m \in \mathcal{M}} P_{ME}(m, 0) \frac{1}{|\mathcal{K}|} \stackrel{(b)}{\geq} (1 - \delta) \frac{1}{|\mathcal{K}|}.
\end{aligned}
\]
Here, (a) follows as $P_{K|ME}(k \,|\, m, 0) = 0$ if there is no $y^n \in \mathcal{Y}^n$ such that $\psi(y^n, m) = k$, and (b) follows from the $\delta$-recoverability of $K$ from $\hat{K}$. ☐
Thus, Lemma 1 shows that requiring Inequality (1) is in fact equivalent to requiring the mFAP to be as small as possible. It also justifies our interest in a large set $\mathcal{K}$.
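For orientation, consider a small numerical example of our own: with reliability parameter $\delta = 0.01$ and a key of $128$ bits, i.e., $|\mathcal{K}| = 2^{128}$, Lemma 1 gives
\[
\mathrm{mFAP} \geq \frac{1 - \delta}{|\mathcal{K}|} = \frac{0.99}{2^{128}},
\]
while Inequality (1) demands $\mathrm{mFAP} \leq 2^{-128}$. The two bounds essentially coincide, so the only way to reduce the false acceptance probability further is to enlarge $\mathcal{K}$, i.e., to increase the secret-key rate.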
There is another way to define achievable secret-key rate versus privacy-leakage rate pairs for the authentication model. Here, we want to keep the key secret from the attacker. $H(K|M)$ can be interpreted as the average information required to specify $k$ when $m$ is known ([16], Chapter 2). Thus, we want $H(K|M)$ to be as large as possible instead of requiring a small mFAP. This means we require $\log |\mathcal{K}| = H(K|M)$. This condition is equivalent to the combination of the perfect secrecy condition $I(K; M) = 0$ [5] and the uniform distribution of the key, i.e., $H(K) = \log |\mathcal{K}|$. Thus, we define achievability as follows.
Definition 3.
A tuple $(R, L)$, $R, L \geq 0$, is an achievable secret-key rate versus privacy-leakage rate pair for the authentication model if for every $\delta > 0$ there is an $n_0 = n_0(\delta)$ such that for all $n \geq n_0$ there exists an authentication protocol such that
\[
\Pr(K = \hat{K}) \geq 1 - \delta,
\]
\[
H(K) = \log |\mathcal{K}|, \tag{2}
\]
\[
I(M; K) = 0, \tag{3}
\]
\[
\frac{1}{n} \log |\mathcal{K}| \geq R - \delta, \qquad \frac{1}{n} I(M; X^n) \leq L + \delta.
\]
We denote the corresponding authentication protocols by PSA-Protocols (Perfect-Secrecy-Authentication-Protocols).
Remark 5.
In [6], the authors derive the secret-key capacity for the source model. They define achievability requiring perfect secrecy and uniform distribution of the key. They do not consider the privacy-leakage in contrast to our definition of achievability.
It is interesting to compare the rate pairs achievable with respect to the restrictive Definition 3 with commonly used weaker requirements. In ([7], Definition 3.1), the authors give a different definition of achievable secret-key rate versus privacy-leakage rate pairs. Instead of Equation (2), they require
\[
H(K) \geq \log |\mathcal{K}| - \delta
\]
and instead of Equation (3) they require
\[
\frac{1}{n} I(M; K) \leq \delta,
\]
which is called the weak secrecy condition [5]. Thus, we get a third definition of achievability.
Definition 4
([7]). A tuple $(R, L)$, $R, L \geq 0$, is an achievable secret-key rate versus privacy-leakage rate pair for the authentication model if for every $\delta > 0$ there is an $n_0 = n_0(\delta)$ such that for all $n \geq n_0$ there exists an authentication protocol such that
\[
\Pr(K = \hat{K}) \geq 1 - \delta, \quad H(K) \geq \log |\mathcal{K}| - \delta, \quad \frac{1}{n} I(M; K) \leq \delta, \quad \frac{1}{n} \log |\mathcal{K}| \geq R - \delta, \quad \frac{1}{n} I(M; X^n) \leq L + \delta.
\]
We denote the corresponding authentication protocols by WSA-Protocols (Weak-Secrecy-Authentication-Protocols).
Definition 5.
The set of achievable rate pairs that are achievable using PSA-Protocols is called the capacity region $\mathcal{R}_{PSA}$. The set of achievable rate pairs that are achievable using WSA-Protocols is called the capacity region $\mathcal{R}_{WSA}$, and the set of achievable rate pairs that are achievable using FAP-Protocols is called the capacity region $\mathcal{R}_{FAP}$.
Now, we look at some straightforward relations between these capacity regions. We can directly see that Definition 3 is more restrictive than Definition 4, so a PSA-Protocol is also a WSA-Protocol and thus
\[
\mathcal{R}_{PSA} \subseteq \mathcal{R}_{WSA}.
\]
We now show that a PSA-Protocol is also a FAP-Protocol.
Lemma 2.
It holds that
\[
\mathcal{R}_{PSA} \subseteq \mathcal{R}_{FAP}.
\]
Proof. 
As Equations (2) and (3) imply $P_{K|M}(k|m) = \frac{1}{|\mathcal{K}|}$ for all $(k, m) \in \mathcal{K} \times \mathcal{M}$, we have
\[
\mathrm{mFAP} = \sum_{m \in \mathcal{M}} P_M(m) \max_{y^n \in \mathcal{Y}^n} P_{K|M}\big(\psi(y^n, m) \,\big|\, m\big) \leq \sum_{m \in \mathcal{M}} P_M(m) \max_{k \in \mathcal{K}} P_{K|M}(k \,|\, m) = \frac{1}{|\mathcal{K}|}.
\]
 ☐

4. Capacity Regions for the Authentication Model

In ([7], Theorem 3.1), the authors derive the capacity region $\mathcal{R}_{WSA}$.
Theorem 1
([7]). It holds that
\[
\mathcal{R}_{WSA} = \bigcup_{U} \big\{ (R, L) : 0 \leq R \leq I(U; Y),\; L \geq I(U; X) - I(U; Y) \big\}.
\]
The union is over all RVs $U$ such that $U - X - Y$. We only have to consider RVs $U$ with $|\mathcal{U}| \leq |\mathcal{X}| + 1$.
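As a numerical illustration, our own example rather than one from [7], the following Python sketch evaluates points of the region in Theorem 1 for a binary symmetric setup: $X$ is uniform, $Y$ is $X$ observed through a BSC with crossover probability $q$, and the auxiliary RV $U$ is obtained by passing $X$ through a BSC with crossover probability $p$, a common though not necessarily optimal choice of test channel.

```python
import math

def h2(p):
    """Binary entropy in bits."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def boundary_point(p, q):
    """For X ~ Ber(1/2), Y = X + BSC(q) noise, U = X + BSC(p) noise (so U - X - Y),
    return (I(U;Y), I(U;X) - I(U;Y)), a point of the Theorem 1 region."""
    eps = p * (1 - q) + (1 - p) * q      # crossover probability of the concatenated channel U -> Y
    i_uy = 1 - h2(eps)                   # I(U;Y); U and Y are both uniform binary RVs
    i_ux = 1 - h2(p)                     # I(U;X)
    return i_uy, i_ux - i_uy

q = 0.1                                  # noise between enrollment and authentication readings
for p in (0.0, 0.05, 0.1, 0.2, 0.3):
    r, l = boundary_point(p, q)
    print(f"p = {p:.2f}: key rate R <= {r:.3f} bit, privacy leakage L >= {l:.3f} bit")
```

For $p = 0$ one recovers the maximal key rate $I(X; Y)$ at privacy leakage $H(X|Y)$; increasing $p$ trades secret-key rate for reduced privacy leakage.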
Remark 6.
The authors of [7] do not consider randomized encoders. In contrast, we permit randomization of the encoders in the enrollment phase. Using the strategy described in ([3], Problem 17.15), one can use the converse for deterministic encoders to prove the converse for randomized encoders with the same bounds on the secret-key rate and the privacy-leakage rate. Thus, the converse in [7] also holds true when randomization is permitted.
The following theorem is one of our main results.
Theorem 2.
It holds that
\[
\mathcal{R}_{PSA} = \mathcal{R}_{WSA}.
\]
Proof. 
We do not prove Theorem 2 here but prove a more general result in the remainder of the text. This result is Theorem 5. It is more general as it is concerned with a compound version of the authentication model. The authentication model is a special case of the compound authentication model where the compound set consists of a single DMMS. ☐
We now strengthen Lemma 2.
Theorem 3.
It holds that
\[
\mathcal{R}_{PSA} = \mathcal{R}_{FAP}.
\]
Proof. 
The achievability result is implied by Lemma 2. For the converse, we use a result of [15]. As discussed in Remark 4, a rate pair $(R, L)$ that is achievable according to Definition 2 is also achievable according to the definition of achievability used in [15], where $R$ plays the role of the false-acceptance exponent $E$. Thus, we use ([15], Theorem 4), which states that a rate pair $(E, L) \notin \mathcal{R}_{WSA}$ is not achievable. This implies our converse. ☐

5. Compound Authentication Model

We now consider authentication when the data source is not perfectly known. Figure 2 shows the corresponding authentication process. The only difference from the authentication process in Section 2 is the source uncertainty. As one can see in Figure 2, we even assume that an attacker can influence the source in the sense that the state of the source is altered, i.e., it generates a different statistic. If the protocol for authentication is not robust, then authentication will not work.
We define the following information theoretic model for this authentication process with source uncertainty.
Definition 6.
Let $n \in \mathbb{N}$. The compound authentication model consists of a set $S$ of DMMSs with generic variables $X_s Y_s$, $s \in S$, (all on the same alphabets $\mathcal{X}$ and $\mathcal{Y}$), the (possibly randomized) encoders $\Phi: \mathcal{X}^n \to \mathcal{M}$, $\Theta: \mathcal{X}^n \to \mathcal{K}$ and the (possibly randomized) decoder $\Psi: \mathcal{Y}^n \times \mathcal{M} \to \mathcal{K}$. Let $X^n$ and $Y^n$ be the output of one of the DMMSs in $S$, i.e., $P_{XY} = P_{X_s Y_s}$ for an $s \in S$, but $s$ is not known. The RVs $M$ and $K$ are generated from $X^n$ using $\Phi$ and $\Theta$. The RV $\hat{K}$ is generated from $Y^n$ and $M$ using $\Psi$. We use the term compound authentication protocol for $(\Phi, \Theta, \Psi)$.
Remark 7.
The uncertainty of the data source is modeled making use of a compound DMMS, that is, the DMMS modeling the PUF source is not known, but we know a set of DMMSs to which the actual DMMS belongs.
Remark 8.
S is assumed to be known and can be used for the generation of the RVs, that is, the encoder and the decoder can depend on these distributions.
Definition 7.
Given $S$, we define the set
\[
I(\hat{s}) = \Big\{ s \in S : \sum_{y \in \mathcal{Y}} P_{X_s Y_s}(x, y) = P_{X_{\hat{s}}}(x) \;\; \forall x \in \mathcal{X} \Big\}
\]
for $\hat{s} \in S$. The sets $I(\hat{s})$, $\hat{s} \in S$, form a partition of $S$, as they are the equivalence classes of the corresponding equivalence relation (two states are equivalent if their sources have the same marginal distribution of $X$). We denote a set of representatives by $\hat{S}$.
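As a toy illustration of the partition, with states of our own choosing: take $\mathcal{X} = \mathcal{Y} = \{0, 1\}$ and three states $s_1, s_2, s_3$ with
\[
P_{X_{s_1}}(0) = P_{X_{s_2}}(0) = \tfrac{1}{2}, \qquad P_{X_{s_3}}(0) = \tfrac{3}{4},
\]
where $s_1$ and $s_2$ differ only in the conditional distributions $P_{Y_s | X_s}$. Then $s_1$ and $s_2$ lie in the same class $I(\hat{s})$, since summing out $y$ yields the same marginal $P_{X_{\hat{s}}}$, whereas $s_3$ forms a class of its own; a possible set of representatives is $\hat{S} = \{s_1, s_3\}$.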

6. Achievability for the Compound Model

For the compound authentication model, we define achievable secret-key rate versus privacy-leakage rate pairs.
Definition 8.
A tuple $(R, L)$, $R, L \geq 0$, is an achievable secret-key rate versus privacy-leakage rate pair for the compound authentication model if for every $\delta > 0$ there is an $n_0 = n_0(\delta)$ such that, for all $n \geq n_0$, there exists a compound authentication protocol such that, for all $s \in S$,
\[
\Pr(K = \hat{K}) \geq 1 - \delta, \tag{5}
\]
\[
H(K) = \log |\mathcal{K}|, \tag{6}
\]
\[
I(M; K) = 0, \tag{7}
\]
\[
\frac{1}{n} \log |\mathcal{K}| \geq R - \delta, \qquad \frac{1}{n} I(M; X^n) \leq L + \delta,
\]
where $P_{XY} = P_{X_s Y_s}$. We denote the corresponding authentication protocols by PSCA-Protocols (Perfect-Secrecy-Compound-Authentication-Protocols).
Definition 9.
The set of achievable secret-key versus privacy-leakage rate pairs that are achievable using PSCA-Protocols is called the compound capacity region $\mathcal{R}_{PSCA}(S)$.

7. Capacity Regions for the Compound Authentication Model

We now derive the compound capacity region $\mathcal{R}_{PSCA}(S)$ for the compound authentication model. We only consider compound sets $S$ such that $|\hat{S}| < \infty$. For the proof, we need the following theorem, which is a generalization of ([3], Theorem 6.10).
Theorem 4.
Given a (possibly infinite) set $\mathcal{W}$ of channels $W: \mathcal{X} \to \mathcal{Y}$, a set $A \subset \mathcal{X}^n$ with $P^n(A) > \eta$, $P \in \mathcal{P}(\mathcal{X})$, $\eta > 0$ and $\epsilon > 0$. Then, for every $\tau > 0$ and all $n$ large enough, there is a pair of mappings $(f, \phi)$, $f: \mathcal{M}_f \to \mathcal{X}^n$, $\phi: \mathcal{Y}^n \to \mathcal{M}_f$, such that $(f, \phi)$ is an $(n, \epsilon)$-code for all $W \in \mathcal{W}$ with codewords in $A$ and
\[
\frac{1}{n} \log |\mathcal{M}_f| \geq \inf_{W \in \mathcal{W}} I(P; W) - \tau.
\]
We call this pair of mappings a compound $(n, \epsilon)$-code for $\mathcal{W}$.
Even though the proof of Theorem 4 is very similar to the proof of ([3], Theorem 6.10), the proof of ([17], Theorem 4.3) and the proof of the results in [18], we prove Theorem 4 for the sake of completeness. The proof can be found in Appendix A.
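As a simple orientation, using numbers of our own: if $\mathcal{W}$ consists of two binary symmetric channels with crossover probabilities $0.05$ and $0.1$ and $P$ is the uniform input distribution, then $\inf_{W \in \mathcal{W}} I(P; W) = 1 - h(0.1) \approx 0.531$ bits per channel use, so Theorem 4 guarantees compound codes whose rate approaches this value while the decoding error stays below $\epsilon$ for both channels simultaneously.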
Theorem 5.
It holds that
\[
\mathcal{R}_{PSCA}(S) = \bigcap_{\hat{s} \in \hat{S}} \bigcup_{U_{\hat{s}}} \Big\{ (R, L) : 0 \leq R \leq \inf_{s \in I(\hat{s})} I(U_{\hat{s}}; Y_s),\; L \geq \sup_{s \in I(\hat{s})} \big( I(U_{\hat{s}}; X_s) - I(U_{\hat{s}}; Y_s) \big) \Big\} \stackrel{(a)}{=} \bigcap_{\hat{s} \in \hat{S}} \bigcup_{U_{\hat{s}}} \mathcal{R}_{\hat{s}}^{(PSCA)}(S, U_{\hat{s}}),
\]
where, for (a), we define $\mathcal{R}_{\hat{s}}^{(PSCA)}(S, U_{\hat{s}})$ appropriately. For all $\hat{s} \in \hat{S}$, the union is over all RVs $U_{\hat{s}}$ such that, for all $s \in I(\hat{s})$, we have $U_{\hat{s}} - X_s - Y_s$. For $|S| < \infty$, we only have to consider RVs $U_{\hat{s}}$ with $|\mathcal{U}_{\hat{s}}| \leq |\mathcal{X}| + |I(\hat{s})|$.
Proof. 
For all $\hat{s} \in \hat{S}$ and all $s \in I(\hat{s})$, let $U_{\hat{s}}$, $X_s$ and $Y_s$ be RVs where $X_s Y_s$ are the output of the DMMS in $S$ with index $s$, and $X_s$ and $U_{\hat{s}}$ are connected by the channel $V_{\hat{s}}: \mathcal{X} \to \mathcal{U}_{\hat{s}}$. Thus, we have the Markov chains $U_{\hat{s}} - X_s - Y_s$ for all $s \in I(\hat{s})$. Let $\mathcal{U} = \bigcup_{\hat{s} \in \hat{S}} \mathcal{U}_{\hat{s}}$. We now show that, given $\delta > 0$, for $n$ large enough we can choose a set $C \subset \mathcal{U}^n$ that consists of $|\mathcal{M}|$ disjoint subsets $C_m$ with the following properties.
  • We consider a partition of the set of all sets $C_m$ into $|\hat{S}|$ subsets. Thus, we denote the sets $C_m$ by $C_{m,\hat{s}}$, $\hat{s} \in \hat{S}$, indicating to which subset they belong. We denote the set of indices $m$ corresponding to $\hat{s}$ by $\mathcal{M}_{\hat{s}}$. For each $C_{m,\hat{s}}$, we have
    \[
    |C_{m,\hat{s}}| = \inf_{s \in I(\hat{s})} \exp\!\big( n (I(U_{\hat{s}}; Y_s) - \delta) \big).
    \]
  • Each $C_{m,\hat{s}}$ consists of sequences of the same type.
  • It holds that
    \[
    P_{U_{\hat{s}}}^n(C) > 1 - \eta \tag{8}
    \]
    for $\eta > 0$ and all $\hat{s} \in \hat{S}$.
  • For each $\hat{s} \in \hat{S}$, one can define pairs of mappings that are compound $(n, \epsilon)$-codes, $\epsilon > 0$, for the channels $W_s: \mathcal{U} \to \mathcal{Y}$, $W_s = P_{Y_s | U_{\hat{s}}}$ for all $s \in I(\hat{s})$ in the following way. Define an (arbitrary) bijective mapping $f_m: \{1, \ldots, |C_{m,\hat{s}}|\} \to C_{m,\hat{s}}$ and an appropriate mapping $\phi_m: \mathcal{Y}^n \to \{1, \ldots, |C_{m,\hat{s}}|\}$. Then, $(f_m, \phi_m)$ is such a code. This means
    \[
    W_s^n\big( \phi_m^{-1}(f_m^{-1}(u^n)) \,\big|\, u^n \big) \geq 1 - \epsilon \tag{9}
    \]
    for all $s \in I(\hat{s})$ and for all codewords $u^n$ in $C_{m,\hat{s}}$. This is possible for all $m \in \mathcal{M}_{\hat{s}}$.
Let δ > 0 . We denote the elements of S ^ by s ^ 1 , s ^ 2 , , s ^ | S ^ | . We consider T P U s ^ 1 , ξ n , T P U s ^ 2 , ξ n , , T P U s ^ | S ^ | , ξ n , ξ > 0 , which are disjoint subsets of U n . We show that they are in fact disjoint subsets of U n for ξ small enough. This can be seen as follows. For s ^ i , s ^ j S ^ , s ^ i s ^ j , it holds that P U s ^ i ( u ) P U s ^ j ( u ) for at least one u U . Thus, there is a u U with
| P U s ^ i ( u ) P U s ^ j ( u ) | > α
for some α > 0 .
Now, assume that there is a u n T P U s ^ i , ξ n T P U s ^ j , ξ n . Denote the type of u n by p u n . Thus, there is a u U with
α < | P U s ^ i ( u ) P U s ^ j ( u ) | = | P U s ^ i ( u ) P u n ( u ) + P u n ( u ) P U s ^ j ( u ) | | P U s ^ i ( u ) P u n ( u ) | + | P U s ^ j ( u ) P u n ( u ) | 2 ξ ,
where the last inequality follows from the assumption that u n T P U s ^ i , ξ n T P U s ^ j , ξ n . Thus, for ξ < α / 2 , this is a contradiction and we know T P U s ^ i , ξ n and T P U s ^ j , ξ n are disjoint.
We start the construction of C by choosing a set A 1 , s ^ 1 T P U s ^ 1 , ξ n with P U s ^ 1 n ( A 1 , s ^ 1 ) η with η > η > 0 . According to Theorem 4, there is a compound ( n , ϵ ) -code for the channels W s , s I ( s ^ 1 ) with at least
inf s I ( s ^ 1 ) exp n ( I ( U s ^ 1 ; Y s ) δ )
codewords u n A 1 , s ^ 1 for n large enough. We denote the set of these codewords by C 1 , s ^ 1 . As there are less than ( n + 1 ) | U | types, we know that there is a set of at least
inf s I ( s ^ 1 ) exp n ( I ( U s ^ 1 ; Y s ) δ ) ( n + 1 ) | U |
codewords in C 1 , s ^ 1 with the same type. We only pick these codewords. There are at least
inf s I ( s ^ 1 ) exp n ( I ( U s ^ 1 ; Y s ) δ | U | log ( n + 1 ) n ) inf s I ( s ^ 1 ) exp n ( I ( U s ^ 1 ; Y s ) δ )
of them for n large enough. We now pick exactly
inf s I ( s ^ 1 ) exp n ( I ( U s ^ 1 ; Y s ) δ )
of these codewords and we denote this set by C 1 , s ^ 1 . Now, we choose a set A 2 , s ^ 1 T P U s ^ 1 , ξ n \ C 1 , s ^ 1 with P U n ( A 2 , s ^ 1 ) η . We construct the set C 2 , s ^ 1 in the same way as C 1 , s ^ 1 . Thus, C 2 , s ^ 1 is a set of
inf s I ( s ^ 1 ) exp n ( I ( U s ^ 1 ; Y s ) δ )
codewords of the same type corresponding to an ( n , ϵ ) -code. We continue this process until we can not find a set
A | M s ^ 1 | + 1 , s ^ 1 T P U s ^ 1 , ξ n \ i M s ^ 1 C i , s ^ 1
with
P U s ^ 1 n ( A | M s ^ 1 | + 1 , s ^ 1 ) η .
This means
P U s ^ 1 n ( ( i M s ^ 1 C i , s ^ 1 ) c T P U s ^ 1 , ξ n ) < η .
We repeat this process for all s ^ s ^ 1 , s ^ S ^ . Thus, we have for all s ^ S ^
P U s ^ n ( C ) P U s ^ n ( i M s ^ C i , s ^ ) = 1 P U s ^ n ( ( i M s ^ C i , s ^ ) c ) = 1 P U s ^ n ( ( i M s ^ C i , s ^ ) c T P U s ^ , ξ n ) P U s ^ n ( ( i M s ^ C i , s ^ ) c ( T P U s ^ , ξ n ) c ) 1 P U s ^ n ( ( i M s ^ C i , s ^ ) c T P U s ^ , ξ n ) P U s ^ n ( ( T P U s ^ , ξ n ) c ) .
Thus, we have Inequality (8) for n large enough.
We now can define the encoders/decoders Φ , Θ and Ψ .
  • We define Φ and Θ as follows. The system gets a sequence x n . It checks if x n T P X s ^ , ξ n , ξ > 0 , for an s ^ S ^ (We can choose ξ small enough and n large enough such that the T P X s ^ , ξ n are disjoint). If this is true for s ^ , the channel V s ^ is used n times to generate u n from x n . For Φ , the system looks in C for u n . If u n C the system chooses for m the index of the subset C m containing u n . If u n C it chooses an arbitrary m M . In addition, if x n s ^ S ^ T P X s ^ , ξ n , it chooses an arbitrary m M . For Θ , the system looks in C for u n . If u n C , it considers the compound ( n , ϵ ) -code corresponding to the subset C m , s ^ containing u n . If
    | C m , s ^ | > min s ^ S ^ inf s I ( s ^ ) exp n ( I ( U s ^ ; Y s ) δ ) ,
    we consider the following deterministic mapping h m : f m 1 ( C m ) K { k ˜ } . Here,
    K = { 1 min s ^ S ^ inf s I ( s ^ ) exp n ( I ( U s ^ ; Y s ) δ ) } .
    The preimage of any k K under h m is a subset of f m 1 ( C m ) of size
    | C m , s ^ | min s ^ S ^ inf s I ( s ^ ) exp n ( I ( U s ^ ; Y s ) δ ) .
    The rest of the k f m 1 ( C m ) is mapped on k ˜ K . If
    | C m , s ^ | = min s ^ S ^ inf s I ( s ^ ) exp n ( I ( U s ^ ; Y s ) δ ) ,
    the system chooses k = f m 1 ( u n ) . In this case, we also define h m : f m 1 ( C m ) K { k ˜ } where h m is injective. If u n C , k is chosen at random according to a uniform distribution on the alphabet. The same holds if u n is mapped on k ˜ or if x n s ^ S ^ T P X , s ^ , ξ n .
  • We define Ψ as follows. The system gets a sequence y n and m. It decodes y n using the code corresponding to C m , s ^ . Then, h m is used on the result. The result is k ^ if it differs from k ˜ . Otherwise, an arbitrary k ^ K is chosen.
Using the properties of the communication protocol, we analyse the achievability conditions. We denote the outputs of the DMMS by X n and Y n and the output of the channel used on X n by U n . Assume the index of the DMMS is s I ( s ^ ) , s ^ S . Thus, P X n Y n = P X s Y s n .
  • We define the following events:
    E 1 = { ( x n , y n , u n ) X n × Y n × U n : ( x n , u n ) T P X s U s ^ , ξ n } , E 2 = { ( x n , y n , u n ) X n × Y n × U n : u n C } , E 3 = m M { ( x n , y n , u n ) X n × Y n × U n : u n C m h m ( f m 1 ( u n ) ) = k ˜ } , E 4 = m M { ( x n , y n , u n ) X n × Y n × U n : u n C m f m 1 ( u n ) ϕ m ( y n ) } .
    According to ([3], Lemma 2.10), we can choose ξ small enough such that ( x n , u n ) T P X s U s ^ , ξ n implies x n T P X s , ξ n and u n T P U s ^ , ξ n . We have
    P X n Y n U n ( E 1 ) = 1 P X n Y n U n ( E 1 c ) = ( a ) 1 P X s Y s U s ^ n ( E 1 c ) = 1 P X s U s ^ n ( T P X s U s ^ , ξ n ) = P X s U s ^ n ( ( T P X s U s ^ , ξ n ) c ) .
    Here, ( a ) follows as for x n T P X s , ξ n the system uses V s ^ to generate u n from x n . Thus,
    Pr ( K K ^ ) P X n Y n U n ( E 1 E 2 E 3 E 4 ) = P X n Y n U n ( E 1 ) + P X n Y n U n ( ( E 2 E 3 E 4 ) E 1 c ) = P X s U s ^ n ( ( T P X s U s ^ , ξ n ) c ) + P X n Y n U n ( E 2 E 1 c ) + P X n Y n U n ( ( E 3 E 4 ) E 1 c ( E 2 c E 1 ) ) .
    Now, we use
    P X n Y n U n ( E 2 E 1 c ) ( x n , u n ) : x n T P X s , ξ n u n C c P X n U n ( x n , u n ) = ( x n , u n ) : x n T P X s , ξ n u n C c P X s U s ^ n ( x n , u n ) ( x n , u n ) : x n X n u n C c P X s U s ^ n ( x n , u n ) = P U s ^ n ( C c )
    and get
    Pr ( K K ^ ) P X s U s ^ n ( ( T P X s U s ^ , ξ n ) c ) + P U s ^ n ( C c ) + P X n Y n U n ( ( E 3 E 4 ) E 1 c E 2 c ) P X s U s ^ n ( ( T P X s U s ^ , ξ n ) c ) + P U s ^ n ( C c ) + P X n Y n U n ( E 4 E 1 c E 2 c ) + P X n Y n U n ( E 3 E 1 c E 2 c ) .
    Now, we define the RV E = e ( X n , U n ) with e : X n × U n { 0 , 1 }
    e ( x n , u n ) = 0 , for u n C x n s ^ S ^ T P X s ^ , ξ n , 1 , otherwise .
    We have
    Pr ( K K ^ ) P X s U s ^ n ( ( T P X s U s ^ , ξ n ) c ) + P U s ^ n ( C c ) + m M P M ( m ) P X n Y n U n | M ( E 4 E 1 c E 2 c | m ) + m M P M E ( m , 0 ) P X n Y n U n | M E ( E 3 E 1 c E 2 c | m , 0 )
    as for all m M
    P X n Y n U n | M E ( E 3 E 1 c E 2 c | m , 1 ) = 0 .
    As u n C and u n T P U s ^ , ξ n imply u n C m for an m M s ^ , we know
    P X n Y n U n | M ( E 4 E 1 c E 2 c | m ) = 0
    and
    P X n Y n U n | M ( E 3 E 1 c E 2 c | m ) = 0
    for m M s ^ . Thus, we have
    Pr ( K K ^ ) P X s U s ^ n ( ( T P X s U s ^ , ξ n ) c ) + P U s ^ n ( C c ) + m M s ^ P M ( m ) P X n Y n U n | M ( E 4 E 1 c E 2 c | m ) + m M s ^ P M E ( m , 0 ) P X n Y n U n | M E ( E 3 E 1 c E 2 c | m , 0 ) .
    We know for m M s ^
    P X n Y n U n | M ( E 4 E 1 c E 2 c | m ) ( x n , y n , u n ) : f m 1 ( u n ) ϕ m ( y n ) u n C m x n T P X s , ξ n P X n Y n U n | M ( x n , y n , u n | m ) = ( x n , y n , u n ) : f m 1 ( u n ) ϕ m ( y n ) u n C m x n T P X s , ξ n P X n | U n Y n M ( x n | u n , y n , m ) P Y n | U n M ( y n | u n , m ) P U n | M ( u n | m ) .
    Using M U n Y n , we have
    P X n Y n U n | M ( E 4 E 1 c E 2 c | m ) ( x n , y n , u n ) : f m 1 ( u n ) ϕ m ( y n ) u n C m x n T P X s , ξ n P X n | U n Y n M ( x n | u n , y n , m ) P Y n | U n ( y n | u n ) P U n | M ( u n | m ) = ( x n , y n , u n ) : f m 1 ( u n ) ϕ m ( y n ) u n C m x n T P X s , ξ n P X n | U n Y n M ( x n | u n , y n , m ) W s n ( y n | u n ) P U n | M ( u n | m ) ( x n , y n , u n ) : f m 1 ( u n ) ϕ m ( y n ) u n C m P X n | U n Y n M ( x n | u n , y n , m ) W s n ( y n | u n ) P U n | M ( u n | m ) = ( y n , u n ) : f m 1 ( u n ) ϕ m ( y n ) u n C m W s n ( y n | u n ) P U n | M ( u n | m ) = u n C m W s n ( ( ϕ m 1 ( f m 1 ( u n ) ) ) c | u n ) P U n | M ( u n | m ) .
    Thus, using Inequality (9), we have
    m M s ^ P M ( m ) P X n Y n U n | M ( E 4 E 1 c E 2 c | m ) ϵ
    for n large enough. Now, consider u n C m , m M . We get
    P U n | M E ( u n | m , 0 ) = x n X n P U n X n | M E ( u n , x n | m , 0 ) = s ^ S ^ x n T P X s ^ , ξ n P U n X n | M E ( u n , x n | m , 0 )
    as
    P U n X n | M E ( u n , x n | m , 0 ) = 0
    for x n s ^ S ^ T P X s ^ , ξ n . We realize that, for u n C m and x n s ^ S ^ T P X s ^ , ξ n ,
    P U n X n | M E ( u n , x n | m , 0 ) = P U n X n M E ( u n , x n , m , 0 ) P M E ( m , 0 ) = P U n X n ( u n , x n ) P M E ( m , 0 ) P M E | U n X n ( m , 0 | u n , x n ) = P U n X n ( u n , x n ) P M E ( m , 0 ) ,
    where the last step follows as
    P M E | U n X n ( m , 0 | u n , x n ) = 1 .
    Thus, we get
    P U n | M E ( u n | m , 0 ) = s ^ S ^ x n T P X s ^ , ξ n P U n X n ( u n , x n ) P M E ( m , 0 ) = s ^ S ^ x n T P X s ^ , ξ n P X s n ( x n ) V s ^ n ( u n | x n ) P M E ( m , 0 ) = s ^ S ^ p P ( n , X ) : | p ( x ) p X s ^ ( x ) | ξ x X x n T p n i = 1 n P X s ( x i ) V s ^ ( u i | x i ) P M E ( m , 0 ) .
    The last term is constant for all u n of the same type. Thus,
    P U n | M E ( u n | m , 0 ) = p C m
    is constant for u n C m . As
    P U n | M E ( u n | m , 0 ) = 0
    for u n C m , we have
    P U n | M E ( u n | m , 0 ) = 1 | C m |
    for u n C m . Now, we get
    P X n Y n U n | M E ( E 3 E 1 c E 2 c | m , 0 ) ( x n , y n , u n ) : u n C m x n T P X s , ξ n h m ( f m 1 ( u n ) ) = k ˜ P X n Y n U n | M E ( x n , y n , u n | m , 0 ) u n C m h m ( f m 1 ( u n ) ) = k ˜ P U n | M E ( u n | m , 0 ) = | h m 1 ( k ˜ ) | p C m .
    We have
    | h m 1 ( k ˜ ) | = | C m | min s ^ S ^ inf s I ( s ^ ) exp n ( I ( U s ^ ; Y s ) δ ) | C m | min s ^ S ^ inf s I ( s ^ ) exp n ( I ( U s ^ ; Y s ) δ )
    and get
    P X n Y n U n | M E ( E 3 E 1 c E 2 c | m , 0 ) 1 | C m | ( | C m | min s ^ S ^ inf s I ( s ^ ) exp n ( I ( U s ^ ; Y s ) δ ) ( | C m | min s ^ S ^ inf s I ( s ^ ) exp n ( I ( U s ^ ; Y s ) δ ) 1 ) ) = min s ^ S ^ inf s I ( s ^ ) exp n ( I ( U s ^ ; Y s ) δ ) | C m | 2 exp ( n ϵ ˜ )
    or
    P X n Y n U n | M E ( E 3 E 1 c E 2 c | m , 0 ) = 0
    respectively, if, for the source state s, it holds that s I ( s ^ ) for the s ^ corresponding to the smallest C m , s ^ . Here,
    ϵ ˜ = inf s I ( s ^ ) I ( U s ^ ; Y s ) min s ^ S ^ inf s I ( s ^ ) I ( U s ^ ; Y s ) .
    Thus, for n large enough,
    P e P X s U s ^ n ( ( T P X s U s ^ , ξ n ) c ) + η + ϵ + 2 exp ( n ϵ ˜ )
    and Inequality (5) is fulfilled for small enough constants and n large enough.
  • We define k ˜ : U n × M { 0 , 1 }
    k ˜ ( u n , m ) = 1 , for u n f m ( h m 1 ( k ˜ ) ) , 0 , otherwise ,
    and the RV K ˜ = k ˜ ( U n , M ) . We have
    P K | M E K ˜ ( k | m , 0 , 0 ) = P U n | M E K ˜ ( f m ( h m 1 ( k ) ) | m , 0 , 0 ) .
    Now, consider u n C m . It holds that
    P U n | M E K ˜ ( u n | m , 0 , 0 ) = P U n | M E ( u n | m , 0 ) P K ˜ | M E ( 0 | m , 0 ) P K ˜ | M E U n ( 0 | m , 0 , u n ) .
    We know
    P K ˜ | M E U n ( 0 | m , 0 , u n ) = 1
    for u n f m ( h m 1 ( k ˜ ) ) . Thus,
    P K | M E K ˜ ( k | m , 0 , 0 ) = P U n | M E ( h m 1 ( k ) | m , 0 ) P K ˜ | M E ( 0 | m , 0 ) = p C m | h m 1 ( k ) | P K ˜ | M E ( 0 | m , 0 )
    for all k K . This means
    P K | M E K ˜ ( k | m , 0 , 0 ) = 1 | K | ,
    as | h m 1 ( k ) | is constant for all k K . We also know
    H ( K | M = m , E = e , K ˜ = k ˜ ) = log | K |
    for P M E K ˜ ( m , e , k ˜ ) > 0 , ( e , k ˜ ) ( 0 , 0 ) as k is chosen according to a uniform distribution on K in this case. Thus,
    log | K | H ( K | M ) H ( K | M E K ˜ ) = ( m , e , k ˜ ) M × { 0 , 1 } × { 0 , 1 } P M E K ˜ ( m , e , k ˜ ) H ( K | M = m , E = e , K ˜ = k ˜ ) = log | K | .
    This means Equations (6) and (7) are fulfilled.
  • For the secret-key rate, we have
    \[
    \frac{1}{n} \log |\mathcal{K}| \geq \min_{\hat{s} \in \hat{S}} \inf_{s \in I(\hat{s})} I(U_{\hat{s}}; Y_s) - \delta.
    \]
  • Finally, we analyse the privacy-leakage rate. We have
    I ( X n ; M ) = H ( M ) H ( M | X n ) H ( M | U n ) + H ( M | U n ) = I ( U n ; M ) H ( M | X n ) ,
    where we use H ( M | U n ) = 0 for the second equality (see ([3], Problem 3.1)). Now, we use
    P M E ( M s ^ , 0 ) P X n Y n U n ( E 1 c E 2 c ) = P X s Y s U s ^ n ( E 1 c E 2 c ) P X s Y s U s ^ n ( E 1 c ) + P X s Y s U s ^ n ( E 2 c ) 1 = P X s U s ^ n ( T P X s U s ^ , ξ n ) + P U s ^ n ( C ) 1 1 η P X s U s ^ n ( ( T P X s U s ^ , ξ n ) c ) 1 ζ
    for ζ > 0 and n large enough. We also use P U n | M E ( u n | m , 0 ) = 1 | C m | for u n C m and get
    H ( U n | M ) H ( U n | M E ) m M s ^ P M E ( m , 0 ) H ( U n | M = m , E = 0 ) ( 1 ζ ) ( min s I ( s ^ ) I ( U s ^ ; Y s ) δ ) n .
    Thus,
    I ( X n ; M ) H ( U n ) H ( M | X n ) n min s I ( s ^ ) I ( U s ^ ; Y s ) + n δ + ζ n min s I ( s ^ ) I ( U s ^ ; Y s ) .
    We now use
    I ( X n ; U n ) = H ( X s n ) H ( X n | U n ) H ( X s n ) H ( X n | U n T ) H ( X s n ) H ( X n | U n T = 1 ) ( 1 ϵ ) = H ( X s n ) H ( X s n | U s ^ n T = 1 ) ( 1 ϵ ) = H ( X s n ) H ( X s n | U s ^ n T = 1 ) ( 1 ϵ ) H ( X s n | U s ^ n T = 0 ) ϵ + H ( X s n | U s ^ n T = 0 ) ϵ I ( X s n ; U s ^ n T ) + ϵ log | X | n = ϵ log | X | n + I ( X s n ; U s ^ n ) + I ( T ; X s n | U s ^ n ) ϵ log | X | n + I ( X s n ; U s ^ n ) + log 2 ,
    where T = t ( X n ) , t : X n { 0 , 1 }
    t ( x n ) = 1 , x n T P X s , ξ n , 0 , else .
    Thus, ϵ is arbitrarily small for large n.
    Thus, we get
    I ( X n ; M ) H ( U n ) H ( M | X n ) n inf s I ( s ^ ) I ( U s ^ ; Y s ) + n δ + ζ n inf s I ( s ^ ) I ( U s ^ ; Y s ) + ϵ log | X | n + I ( X s n ; U s ^ n ) + log 2 I ( X n ; U n ) .
    Again, using ([3], Problem 3.1), we get
    H ( U n ) H ( M | X n ) I ( X n ; U n ) = H ( U n | X n ) H ( M | X n ) = H ( U n M | X n ) H ( M | X n ) = H ( U n | M X n ) .
    We also know that
    0 I ( U n ; Y n | X n M ) = H ( Y n | X n M ) H ( Y n | X n U n M ) = H ( Y n | X n M ) H ( Y n | X n U n ) H ( Y n | X n ) H ( Y n | X n U n ) = I ( Y n ; U n | X n ) = 0 .
    Here, we use ([3], Problem 3.1) and M X n Y n . Thus,
    I ( U n ; X n Y n M ) = I ( U n ; X n M ) = I ( U n ; Y n M ) + I ( U n ; X n | Y n M ) .
    Thus,
    I ( U n ; X n M ) I ( U n ; Y n M ) .
    It follows that
    H ( U n | M X n ) H ( U n | M Y n ) .
    Now, we bound the right hand side of Inequality (11) using Inequality (12) and use Fano’s inequality. Thus, we have
    1 n I ( X n ; M ) sup s I ( s ^ ) I ( X s ; U s ^ ) I ( U s ^ ; Y s ) ( 13 ) + δ + ζ I ( U s ^ ; Y s ) + ϵ log | X | + 1 n log 2 + P e log ( | U | 1 ) + h ( P e ) n .
    Here, we use
    \[
    I(X_s; U_{\hat{s}}) - \inf_{s \in I(\hat{s})} I(U_{\hat{s}}; Y_s) = \sup_{s \in I(\hat{s})} \big( I(X_s; U_{\hat{s}}) - I(U_{\hat{s}}; Y_s) \big),
    \]
    as $I(X_s; U_{\hat{s}})$ is constant for all $s \in I(\hat{s})$.
Using these results, we conclude from Inequalities (10) and (13) that
\[
\mathcal{R}^{(PSCA)}(S) \supseteq \bigcup_{U_{\hat{s}_1}, \ldots, U_{\hat{s}_{|\hat{S}|}}} \bigcap_{\hat{s} \in \hat{S}} \mathcal{R}_{\hat{s}}^{(PSCA)}(S, U_{\hat{s}}).
\]
Using the distributive law for sets, we can see that this is equivalent to
\[
\mathcal{R}^{(PSCA)}(S) \supseteq \bigcap_{\hat{s} \in \hat{S}} \bigcup_{U_{\hat{s}}} \mathcal{R}_{\hat{s}}^{(PSCA)}(S, U_{\hat{s}})
\]
(see Appendix B). We now consider the converse. Assume X n Y n are distributed i.i.d. according to P X s Y s for an arbitrary s S . The following calculations hold for all s S . Similarly to the converse part of the proof of ([7], Theorem 3.1), we have
log | K | = ( a ) H ( K ) = I ( K ; K ^ ) + H ( K | K ^ ) ( b ) I ( K ; M Y n ) + F = I ( K ; M ) + I ( K ; Y n | M ) + F ( c ) I ( Y n ; M K ) + F = i = 1 n I ( K M Y i 1 ; Y i ) + F ,
where we use Equation (6) for (a), Fano’s inequality with F = δ n log | K | + 1 and the data processing inequality in combination with K M Y n K ^ , which follows from the definition of the compound authentication protocol for (b) and Equation (7) for (c). From the definition of the compound authentication protocol, we also know that Y n X n M K . Using the definition of Markov chains, this implies Y i 1 X i 1 M K Y i for all i { 2 n } (see Appendix C). (From Y n X i 1 X i n M K , we get Y i 1 Y i X i 1 M K X i n using Implications (A11) and (A13). Then, we use Implication (A12) to get Y i 1 X i 1 Y i M K and from this we get the desired result using Implication (A13).)
The equation
I ( Y i K M ; X i 1 Y i 1 ) = I ( Y i K M ; X i 1 )
is equivalent to Y i 1 X i 1 M K Y i ([3], Definition 3.9). This is equivalent to
H ( Y i | K M X i 1 Y i 1 ) = H ( Y i | K M X i 1 ) + ( H ( K M | X i 1 ) H ( K M | X i 1 Y i 1 ) ) .
Thus, H ( Y i | K M Y i 1 ) H ( Y i | K M X i 1 ) . Thus, we have
I ( K M Y i 1 ; Y i ) I ( K M X i 1 ; Y i ) ,
so
log | K | i = 1 n I ( K M X i 1 ; Y i ) + F .
Now, we define U i = K M X i 1 for all i { 1 n } . This implies U i X i Y i for all i { 1 n } , which can again be seen using the results from Appendix C. Let Q be a time sharing RV independent of all others and uniformly distributed on Q = { 1 n } and let U = Q U Q , X = X Q and Y = Y Q . Then,
P U X Y ( ( u , q ) , x , y ) = P Q U q X q Y q ( q , u , x , y ) = ( a ) P Q U q | X q ( u , q | x ) P X q Y q ( x , y )
for all ( u , q , x , y ) U q × Q × X × Y , where ( a ) follows from U q X q Y q and the independence of Q. We have
P X Y ( x , y ) = q , u P Q U q X q Y q ( q , u , x , y ) = i = 1 n 1 n P X i Y i ( x , y ) = ( a ) P X s Y s ( x , y ) = P X q Y q ( x , y )
for an arbitrary q Q and ( x , y ) X × Y , where ( a ) follows as P X i Y i = P X s Y s for all i Q as the RVs X n Y n are generated i.i.d. We also have for all ( u , q , x ) U q × Q × X
P U | X ( u , q | x ) = y Y P Q U q X q Y q ( q , u , x , y ) P X ( x ) = P Q U q X q ( q , u , x ) P X q ( x ) = P Q U q | X q ( q , u | x ) .
Thus, P U X Y ( ( u , q ) , x , y ) = P X Y ( x , y ) P U | X ( u , q | x ) , which means U X Y . We also have
log | K | i = 1 n I ( U i , Y i ) + F = n i = 1 n 1 n I ( U Q , Y | Q = i ) + F = n I ( U Q ; Y | Q ) + F = n H ( Y | Q ) H ( Y | U Q Q ) + F n ( H ( Y ) H ( Y | U Q Q ) ) + F = n I ( U Q Q ; Y ) + F = n I ( U ; Y ) + F .
Thus, using the definition of F, we get
\[
\frac{1}{n} \log |\mathcal{K}| \leq (1 - \delta)^{-1} \Big( I(U; Y) + \frac{1}{n} \Big),
\]
which implies
\[
\frac{1}{n} \log |\mathcal{K}| \leq I(U; Y) + \delta \tag{16}
\]
for δ > 0 and n large enough. We also consider
I ( X n ; M ) = H ( M ) H ( M | X n ) H ( M | Y n ) H ( K M | X n ) = H ( K M | Y n ) H ( K | Y n M ) H ( K M | X n ) .
From the definition of the compound authentication protocol, we know $K - M Y^n - \hat{K}$. Using the data processing inequality, we get $I(K; M Y^n) \geq I(K; \hat{K})$, which means $H(K | M Y^n) \leq H(K | \hat{K}) \leq F$, where the last inequality follows from Fano's inequality. Thus,
I ( X n ; M ) H ( K M | Y n ) H ( K M | X n ) F = I ( K M ; X n ) I ( K M ; Y n ) F = i = 1 n I ( K M ; X i | X i 1 ) i = 1 n I ( K M ; Y i | Y i 1 ) F = ( a ) i = 1 n I ( K M X i 1 ; X i ) i = 1 n k I ( K M Y i 1 ; Y i ) F ( b ) i = 1 n I ( K M X i 1 ; X i ) i = 1 n I ( K M X i 1 ; Y i ) F ,
where (a) follows as X i and Y i are i.i.d. and (b) follows from Inequality (14). With our definition of U, X and Y and the same argumentation as before, we get
\[
\frac{1}{n} I(X^n; M) \geq I(U; X) - I(U; Y) - \frac{F}{n} \stackrel{(a)}{\geq} I(U; X) - I(U; Y) - \delta \tag{17}
\]
for $n$ large enough, where, for (a), we use the definition of $F$ and Inequality (16). We have for all $(u, q, x) \in \mathcal{U}_q \times \mathcal{Q} \times \mathcal{X}$
P U X ( ( q , u ) , x ) = P Q ( q ) P U q X q ( u , x ) = P K M X q 1 X q ( k , m , x q 1 , x q ) P Q ( q ) = P Q ( q ) x q + 1 n P K M X n ( k , m , x n ) = ( a ) P Q ( q ) x q + 1 n P X n ( x n ) P M | X n ( m | x n ) P K | X n ( k | x n ) = P Q ( q ) x q + 1 n P X n ( x n ) Θ ( x n ) Φ ( x n ) ,
where (a) follows from M X n K , which follows from the definition of the compound authentication protocol. As P X n is the same for all s I ( s ^ ) , s ^ S ^ , this result implies that P U X is the same for all s I ( s ^ ) , s ^ S ^ . We get the bounds (16) and (17) for each s S . We denote the corresponding RVs U X Y by U s X s Y s for all s S . The joint distribution of X s Y s is P X s Y s S as we see from Equation (15). Thus, Equation (18) and the Inequalities (16) and (17) for all s S imply
\[
\mathcal{R}_{PSCA}(S) \subseteq \bigcup_{U_{\hat{s}_1}, \ldots, U_{\hat{s}_{|\hat{S}|}}} \bigcap_{\hat{s} \in \hat{S}} \mathcal{R}_{\hat{s}}^{(PSCA)}(S, U_{\hat{s}}).
\]
We again use the distributive law for sets to get our result. The bounds on the cardinality of the alphabet of the auxiliary random variables can be derived as in [19]. ☐
Remark 9.
This result implies Theorem 2 as we use a deterministic decoder for the achievability proof.
Remark 10.
In [19], the authors also derive the compound capacity region for $|S| < \infty$, but, in contrast to this work, they consider deterministic protocols and require strong secrecy instead of perfect secrecy when defining achievability. This compound capacity region equals $\mathcal{R}_{PSCA}(S)$.

8. Secure Storage

We now discuss some other applications of the already proven results apart from authentication. For this purpose, we take a look at some results for secure storage from [13,14], which follow directly from our results for authentication. Here, we again consider compound sets $S$ with $|\hat{S}| < \infty$.
In [13], we consider the following model for secure storage with source uncertainty, where the corresponding scenario is depicted in Figure 3.
Definition 10.
Let $n \in \mathbb{N}$. The compound storage model consists of a set $S \subset \mathcal{P}(\mathcal{X} \times \mathcal{Y})$ of DMMSs with generic variables $X_s Y_s$, $s \in S$, (all on the same alphabets $\mathcal{X}$ and $\mathcal{Y}$), a source $P_{D^n} \in \mathcal{P}(\mathcal{D}_n)$ that puts out a RV $D^n$, the (possibly randomized) encoder $\Phi_n: \mathcal{X}^n \times \mathcal{D}_n \to \mathcal{M}$ and the (possibly randomized) decoder $\Psi_n: \mathcal{Y}^n \times \mathcal{M} \to \mathcal{D}_n$. Let $X^n$ and $Y^n$ be the output of one of the DMMSs in $S$, i.e., $P_{XY} = P_{X_s Y_s}$ for an $s \in S$, but $s$ is not known. $D^n$ is independent of $X^n Y^n$. The RV $M$ is generated from $X^n$ and $D^n$ using $\Phi_n$. The RV $\hat{D}^n$ is generated from $Y^n$ and $M$ using $\Psi_n$. We use the term compound storage protocol for $(\Phi_n, \Psi_n)$. Additionally, it holds that, for all $\delta > 0$, there is an $n_0 = n_0(\delta)$ such that for all $n \geq n_0$
\[
\frac{1}{n} D\big( P_{D^n} \,\big\|\, U_{\mathcal{D}_n} \big) < \delta,
\]
where $U_{\mathcal{D}_n}$ denotes the uniform distribution on $\mathcal{D}_n$.
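The divergence condition is mild. As an example of our own, it is satisfied whenever $D^n$ is uniformly distributed on a subset $\mathcal{A} \subseteq \mathcal{D}_n$ with $|\mathcal{A}| \geq 2^{-n\delta} |\mathcal{D}_n|$, since then
\[
\frac{1}{n} D\big( P_{D^n} \,\big\|\, U_{\mathcal{D}_n} \big) = \frac{1}{n} \log \frac{|\mathcal{D}_n|}{|\mathcal{A}|} \leq \delta.
\]
In particular, the nearly uniform output of a good fixed-length source encoder qualifies, which is what makes it possible to combine source compression and secure storage.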
We define achievability for this model.
Definition 11.
A tuple $(R, L)$, $R, L \geq 0$, is an achievable storage rate versus privacy-leakage rate pair for the compound storage model if for every $\delta > 0$ there is an $n_0 = n_0(\delta)$ such that for all $n \geq n_0$ there exists a compound storage protocol such that for all $s \in S$
\[
\Pr(D^n = \hat{D}^n) \geq 1 - \delta, \quad I(M; D^n) = 0, \quad \frac{1}{n} \log |\mathcal{D}_n| \geq R - \delta, \quad \frac{1}{n} I(M; X^n) \leq L + \delta,
\]
where $P_{XY} = P_{X_s Y_s}$. We denote the corresponding storage protocols by PSCS-Protocols (Perfect-Secrecy-Compound-Storage-Protocols).
Definition 12.
The set of achievable rate pairs that are achievable using PSCS-Protocols is called the compound capacity region $\mathcal{R}_{PSCS}(S)$.
We then can prove the following result.
Theorem 6.
It holds that
\[
\mathcal{R}_{PSCS}(S) = \mathcal{R}_{PSCA}(S).
\]
Remark 11.
The compound storage model is essentially equivalent to a compound version of the chosen secret system in [7]. For this reason, Theorem 6 follows using the same approach as in [7].
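To give a flavour of this reduction, here is a simplified sketch of our own, not the construction from [7] or [13]: once an authentication protocol yields a key $K$ that is uniformly distributed and independent of the helper message $M$, data of the same length can be stored by one-time-padding it with $K$, so that the public database contains only $M$ and the masked data.

```python
import secrets

def store(data_bits, key_bits):
    """Storage sketch: mask the data D^n with the extracted key K (one-time pad).
    Only the masked data (and the helper message from enrollment) enter the public database."""
    assert len(data_bits) == len(key_bits)
    return [d ^ k for d, k in zip(data_bits, key_bits)]

def retrieve(masked_bits, key_hat_bits):
    """Retrieval sketch: unmask with the key K_hat reconstructed from Y^n and the helper message."""
    return [c ^ k for c, k in zip(masked_bits, key_hat_bits)]

# Toy run with a hypothetical 128-bit key that is reconstructed without error.
key = [secrets.randbelow(2) for _ in range(128)]
data = [secrets.randbelow(2) for _ in range(128)]
stored = store(data, key)
print(retrieve(stored, key) == data)   # True; if K is uniform and independent of M,
                                       # the masked data reveals nothing about D^n
```

The perfect secrecy and uniformity conditions on $K$ are exactly what make the one-time-pad step leak nothing about the stored data.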
We combine source compression and secure storage in [14] by considering the following model, which models the scenario depicted in Figure 4.
Definition 13.
Let $k, n_k \in \mathbb{N}$. The compound source storage model consists of a set $S \subset \mathcal{P}(\mathcal{X} \times \mathcal{Y})$ of DMMSs with generic variables $X_s Y_s$, $s \in S$, (all on the same alphabets $\mathcal{X}$ and $\mathcal{Y}$), a general source $\mathbf{V}$ [20] that fulfills the strong converse property, the (possibly randomized) encoder $\Phi_k: \mathcal{X}^{n_k} \times \mathcal{V}^k \to \mathcal{M}$ and the (possibly randomized) decoder $\Psi_k: \mathcal{Y}^{n_k} \times \mathcal{M} \to \mathcal{V}^k$. Let $X^{n_k}$ and $Y^{n_k}$ be the output of one of the DMMSs in $S$, i.e., $P_{XY} = P_{X_s Y_s}$ for an $s \in S$, but $s$ is not known. The RV $M$ is generated from $X^{n_k}$ and $V^k$ using $\Phi_k$. The RV $\hat{V}^k$ is generated from $Y^{n_k}$ and $M$ using $\Psi_k$. We use the term compound source storage protocol for $(\Phi_k, \Psi_k)$.
For this model, we define achievability where we consider the output of the PUF source as a resource.
Definition 14.
A tuple $(B, L)$, $B, L \geq 0$, is an achievable performance pair for the compound source storage model if, for every $\delta > 0$, there is a $k_0 = k_0(\delta)$ such that, for all $k \geq k_0$, there exists a compound source storage protocol such that, for all $s \in S$,
\[
\Pr(V^k = \hat{V}^k) \geq 1 - \delta, \quad I(M; V^k) = 0, \quad \frac{n_k}{k} \leq B + \delta, \quad \frac{1}{n_k} I(M; X^{n_k}) \leq L + \delta,
\]
where $P_{XY} = P_{X_s Y_s}$. We denote the corresponding compound source storage protocols by PSCSS-Protocols (Perfect-Secrecy-Compound-Source-Storage-Protocols).
Definition 15.
The set of achievable performance pairs that are achievable using PSCSS-Protocols is called the optimal performance region $\mathcal{R}_{PSCSS}(S, \mathbf{V})$.
We then can prove the following results.
Theorem 7.
It holds that
\[
\mathcal{R}_{PSCSS}(S, \mathbf{V}) \supseteq \bigcap_{\hat{s} \in \hat{S}} \bigcup_{U_{\hat{s}}} \Big\{ (B, L) : B \geq \frac{\bar{H}(\mathbf{V})}{\inf_{s \in I(\hat{s})} I(U_{\hat{s}}; Y_s)},\; L \geq \sup_{s \in I(\hat{s})} \big( I(U_{\hat{s}}; X_s) - I(U_{\hat{s}}; Y_s) \big) \Big\} \stackrel{(a)}{=} \bigcap_{\hat{s} \in \hat{S}} \bigcup_{U_{\hat{s}}} \mathcal{R}_{\hat{s}}^{(PSCSS)}(S, \mathbf{V}, U_{\hat{s}}),
\]
where, for (a), we define $\mathcal{R}_{\hat{s}}^{(PSCSS)}(S, \mathbf{V}, U_{\hat{s}})$ appropriately. For all $\hat{s} \in \hat{S}$, the union is over all RVs $U_{\hat{s}}$ such that, for all $s \in I(\hat{s})$, we have $U_{\hat{s}} - X_s - Y_s$.
Theorem 8.
For stationary ergodic sources $\mathbf{V}$, it holds that
\[
\mathcal{R}_{PSCSS}(S, \mathbf{V}) = \bigcap_{\hat{s} \in \hat{S}} \bigcup_{U_{\hat{s}}} \mathcal{R}_{\hat{s}}^{(PSCSS)}(S, \mathbf{V}, U_{\hat{s}}).
\]
For all $\hat{s} \in \hat{S}$, the union is over all RVs $U_{\hat{s}}$ such that, for all $s \in I(\hat{s})$, we have $U_{\hat{s}} - X_s - Y_s$. For $|S| < \infty$, we only have to consider RVs $U_{\hat{s}}$ with $|\mathcal{U}_{\hat{s}}| \leq |\mathcal{X}| + |I(\hat{s})|$.

9. Conclusions

We derived the capacity region for the (compound) authentication model requiring perfect secrecy and a uniform distribution of the key generated for authentication, and compared the result to existing results where only strong secrecy and a weaker condition on the distribution of the key are required. The two capacity regions are the same. We could prove this result by allowing for randomized encoders, which are not necessarily used when deriving the capacity region corresponding to the weaker definition of achievability. We saw that we can use the results for authentication to prove corresponding results for secure storage.
As already mentioned, compound sources not only model source uncertainty but also model attacks where an attacker can influence parameters of the source while the legitimate parties do not know which parameters the attacker chose. It is essential that, in this scenario, the parameter is constant for all symbols read from the source. An attack where the parameter can be varied while the source is used is fundamentally stronger. A characterization of achievable rates for this attack scenario is not known, except for the source model for secret key generation, for which it has been derived in [21]. For an overview of these types of attacks, see [22]. Recently, the corresponding problem for wiretap channels could be solved [23,24]. For the source model, the attacker can choose his strategy depending on the public data, which is a difficulty that does not appear for wiretap channels. Nevertheless, the authors hope that, using techniques from the works concerning the wiretap channel, the open problem for the source model can be solved.

Acknowledgments

Funding is acknowledged from the German Research Foundation (DFG) via grant BO 1734/20-1 and from the Federal Ministry of Education and Research (BMBF) via grant 16KIS0118K. Holger Boche would like to thank Rainer Plaga, Federal Office for Information Security (BSI), for the discussion on PUFs and issues concerning different secrecy measures.

Author Contributions

Sebastian Baur and Holger Boche conceived this study and derived the results. Sebastian Baur wrote the manuscript.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A. Proof of Theorem 4

Proof. 
We prove the result for compound codes with the additional constraint on the decoding sets that, for ζ > 0 , it holds that
ϕ 1 ( m ) W W T W , ζ n ( f ( m ) )
for all messages m M f . Additionally, for ζ > 0 , we require
f ( m ) A ˜ = A T P , ζ n
for all m M f . First, consider the case that W is a finite set. Let ( f , ϕ ) be such a code that can not be extended. Thus, for all x n A ˜ , there is a W W such that
W n ( W ˜ W T W ˜ , ζ n ( x n ) \ B | x n ) < 1 ϵ ,
where B = m M f ϕ 1 ( m ) . It also holds that
P n ( A ˜ ) P n ( A ) + P n ( T P , ζ n ) 1 η / 2
for n large enough. We now consider the set
A ˜ W = { x n A ˜ : W n ( W ˜ W T W ˜ , ζ n ( x n ) \ B | x n ) < 1 ϵ } .
We know W W A ˜ W = A ˜ , as for all x n A ˜ there is at least one W W with Inequality (A3). Thus,
η / 2 P n ( W W A ˜ W ) W W P n ( A ˜ W ) | W | max W W P n ( A ˜ W ) .
Thus, there is a W ¯ W such that for all x n A ˜ W ¯
W ¯ n ( W ˜ W T W ˜ , ζ n ( x n ) \ B | x n ) < 1 ϵ
and
P n ( A ˜ W ¯ ) η 2 | W | .
Thus,
W ¯ n ( B c | x n ) + W ¯ n ( W ˜ W T W ˜ , ζ n ( x n ) | x n ) W ¯ n ( B c W ˜ W T W ˜ , ζ n ( x n ) | x n ) = W ¯ n ( W ˜ W T W ˜ , ζ n ( x n ) \ B | x n ) < 1 ϵ ,
which means
W ¯ n ( B | x n ) > ϵ δ
for all x n A ˜ W ¯ as W ¯ n ( B c | x n ) = 1 W ¯ n ( B | x n ) , W ¯ n ( W ˜ W T W ˜ , ζ n ( x n ) | x n ) 1 δ for δ > 0 and n large enough and W ¯ n ( B c W ˜ W T W ˜ , ζ n ( x n ) | x n ) 1 . Thus, we have
W ¯ n ( B x ¯ n T P , ζ n T W ¯ , ζ n ( x ¯ n ) | x n ) W ¯ n ( B | x n ) + W ¯ n ( x ¯ n T P , ζ n T W ¯ , ζ n ( x ¯ n ) | x n ) 1 ϵ δ + ( 1 ξ ) 1 = ϵ ξ δ
for all x n A ˜ W ¯ , ξ > 0 and n large enough. (We choose ϵ , δ and ξ such that ϵ ξ δ > 0 .) Thus, B = B x ¯ n T P , ζ n T W ¯ , ζ n ( x ¯ n ) is an ϵ ξ δ image of A ˜ W ¯ (see [3]). Thus,
| B x ¯ n T P , ζ n T W ¯ , ζ n ( x ¯ n ) | g W ¯ n ( A ˜ W ¯ , ϵ ξ δ ) ,
where g W ¯ n ( A ˜ W ¯ , ϵ ξ δ ) is defined as in [3]. We have
( P W ¯ ) n ( B ) = y n B i = 1 n a X P ( a ) W ¯ ( y i | a ) = ( a ) y n B x n X n i = 1 n P ( x i ) W ¯ ( y i | x i ) y n B x n A ˜ W ¯ P n ( x n ) W ¯ n ( y n | x n ) = x n A ˜ W ¯ P n ( x n ) y n B W ¯ n ( y n | x n ) ( ϵ δ ξ ) P n ( A ˜ W ¯ ) η / 2 ( ϵ δ ξ ) 1 | W | ,
where (a) can be shown with induction. Using ([3], Lemma 2.14), we get for n large enough
1 n log | B | H ( P W ¯ ) ( γ + 1 n log | W | )
with γ > 0 . Additionally, we have
| B | = ( a ) | m M f ϕ 1 ( m ) x n T P , ζ n T W ¯ , ζ n ( x n ) | = | m M f ϕ 1 ( m ) x n T P , ζ n T W ¯ , ζ n ( x n ) | m M f | ϕ 1 ( m ) x n T P , ζ n T W ¯ , ζ n ( x n ) | ( b ) m M f | W W T W , ζ n ( f ( m ) ) x n T P , ζ n T W ¯ , ζ n ( x n ) | ,
where (a) follows from the definition of B and (b) follows from Subset Relationship (A1). We now define
W m * = { W W : T W , ζ n ( f ( m ) ) x n T P , ζ n T W ¯ , ζ n ( x n ) } .
As
T W ¯ , ξ n ( f ( m ) ) x n T P , ζ n T W ¯ , ζ n ( x n )
for all $m \in \mathcal{M}_f$, which follows from Relation (A2), we have
| B | m M f max W W m * | T W , ζ n ( f ( m ) ) | · | W | .
Let
W * = arg max W m M f W m * | T W , ζ n ( f ( m ) ) | .
Thus, we get the upper bound
| B | | M f | exp ( n ( H ( W * | P ) + γ + log | W | n ) ) ,
γ > 0 ([3], Lemma 2.13).
For all $W \in \mathcal{W}_m^*$ and all $m \in M_f$, there is a $y^n \in \mathcal{Y}^n$ such that $y^n \in T^n_{W,\zeta'}(f(m))$ and $y^n \in T^n_{\bar{W},\zeta'}(x^n)$ for an $x^n \in T^n_{P,\zeta}$. Using Relation (A2), we have $y^n \in T^n_{PW,(\zeta+\zeta')|\mathcal{X}|}$ and $y^n \in T^n_{P\bar{W},(\zeta+\zeta')|\mathcal{X}|}$ (see ([3], Lemma 2.10)). Let $\zeta'' = (\zeta+\zeta')|\mathcal{X}|$. Thus,
$\| PW - P\bar{W} \|_1 = \sum_{b \in \mathcal{Y}} | PW(b) - P\bar{W}(b) | = \sum_{b \in \mathcal{Y}} | PW(b) - N(b \mid y^n)/n + N(b \mid y^n)/n - P\bar{W}(b) | \le \sum_{b \in \mathcal{Y}} \big( | PW(b) - N(b \mid y^n)/n | + | N(b \mid y^n)/n - P\bar{W}(b) | \big) \le 2 |\mathcal{Y}| \zeta''.$
Using ([3], Lemma 2.7), we have $| H(PW) - H(P\bar{W}) | \le 2 |\mathcal{Y}| \zeta'' \log \frac{1}{2\zeta''}$ for all $W \in \mathcal{W}_m^*$ and all $m \in M_f$. Using Inequalities (A4) and (A5) and the fact that $W^* \in \mathcal{W}_m^*$ for an $m \in M_f$, we get, for $\gamma$, $\gamma'$, $\zeta$ and $\zeta'$ small enough and $n$ large enough,
$\frac{1}{n} \log |M_f| \ge H(P\bar{W}) - H(W^* \mid P) - \gamma - \gamma' - \frac{2 \log |\mathcal{W}|}{n} \ge H(PW^*) - 2 |\mathcal{Y}| \zeta'' \log \frac{1}{2\zeta''} - H(W^* \mid P) - \gamma - \gamma' - \frac{2 \log |\mathcal{W}|}{n} \ge I(P; W^*) - \tau \ge \min_{W \in \mathcal{W}} I(P; W) - \tau. \qquad (A6)$
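(For readability, we note how the first inequality in (A6) arises; it is merely the combination of the lower bound (A4) and the upper bound (A5) on $\frac{1}{n} \log |B'|$ and uses no new ingredients:
$H(P\bar{W}) - \gamma - \frac{1}{n} \log |\mathcal{W}| \le \frac{1}{n} \log |B'| \le \frac{1}{n} \log |M_f| + H(W^* \mid P) + \gamma' + \frac{1}{n} \log |\mathcal{W}|,$
and rearranging yields the first line of (A6).)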
Now, consider the case of an infinite set $\mathcal{W}$. Let $M \in \mathbb{N}$, $M \ge 2|\mathcal{Y}|^2$. We construct the set $\mathcal{W}^*$ of channels $W^*: \mathcal{X} \to \mathcal{Y}$ with the following properties. For all $W \in \mathcal{W}$, there is a $W^* \in \mathcal{W}^*$ with
$| W(y \mid x) - W^*(y \mid x) | \le \frac{|\mathcal{Y}|}{M} \qquad (A7)$
for all $(x, y) \in \mathcal{X} \times \mathcal{Y}$,
$W(y \mid x) \le W^*(y \mid x) \, e^{2|\mathcal{Y}|^2/M} \qquad (A8)$
for all $(x, y) \in \mathcal{X} \times \mathcal{Y}$, and
$|\mathcal{W}^*| \le (1 + M)^{|\mathcal{X}||\mathcal{Y}|}. \qquad (A9)$
Such a construction is possible as described in [18]. Using Inequalities (A9) and (A6), we know that there is a compound $(n, \epsilon')$-code, $\epsilon > \epsilon' > 0$, for $\mathcal{W}^*$ with
$\frac{1}{n} \log |M_f| \ge \min_{W \in \mathcal{W}^*} I(P; W) - \tau$
if $M$ depends on $n$ polynomially. We now show that this code is a compound $(n, \epsilon)$-code for $\mathcal{W}$ with
$\frac{1}{n} \log |M_f| \ge \inf_{W \in \mathcal{W}} I(P; W) - \tau.$
Let $W^* = \arg\min_{W \in \mathcal{W}^*} I(P; W)$ and let $W' \in \mathcal{W}$ be the channel corresponding to $W^*$. Then, we have
$\inf_{W \in \mathcal{W}} I(P; W) \overset{(a)}{\le} I(P; W') \overset{(b)}{\le} I(P; W^*) + \beta \overset{(c)}{=} \min_{W \in \mathcal{W}^*} I(P; W) + \beta,$
$\beta > 0$, where (a) follows from the definition of the infimum and (b) follows as Inequality (A7) implies
$\| W'(\cdot \mid a) - W^*(\cdot \mid a) \|_1 \le \frac{|\mathcal{Y}|^2}{M}$
for all $a \in \mathcal{X}$. Thus, using ([3], Lemma 2.7), we have
$| I(P; W') - I(P; W^*) | = | H(W' \mid P) - H(W^* \mid P) | = \Big| \sum_{a \in \mathcal{X}} P(a) \big( H(W'(\cdot \mid a)) - H(W^*(\cdot \mid a)) \big) \Big| \le \sum_{a \in \mathcal{X}} P(a) \, | H(W'(\cdot \mid a)) - H(W^*(\cdot \mid a)) | \le \frac{|\mathcal{Y}|^2}{M} \log \frac{M}{|\mathcal{Y}|}.$
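(For completeness, the last step can be spelled out: by the choice $M \ge 2|\mathcal{Y}|^2$ we have $|\mathcal{Y}|^2/M \le 1/2$, so ([3], Lemma 2.7), in the $\ell_1$ form used above, gives for every $a \in \mathcal{X}$
$| H(W'(\cdot \mid a)) - H(W^*(\cdot \mid a)) | \le -\frac{|\mathcal{Y}|^2}{M} \log \frac{|\mathcal{Y}|^2/M}{|\mathcal{Y}|} = \frac{|\mathcal{Y}|^2}{M} \log \frac{M}{|\mathcal{Y}|},$
and averaging over $P$ preserves the bound.)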
For $M = n^2$, we get (b) for $n$ large enough. Finally, (c) follows from the choice of $W^*$. Additionally, it holds that for each $W \in \mathcal{W}$ there is a $W^* \in \mathcal{W}^*$ with
$W^n(y^n \mid x^n) \le e^{2|\mathcal{Y}|^2 n/M} (W^*)^n(y^n \mid x^n),$
which follows from Inequality (A8), since $W^n(y^n \mid x^n) = \prod_{i=1}^{n} W(y_i \mid x_i)$. Thus, for all $m \in M_f$, we have
$W^n\big( (\phi^{-1}(m))^c \,\big|\, f(m) \big) \le (W^*)^n\big( (\phi^{-1}(m))^c \,\big|\, f(m) \big) \, e^{2|\mathcal{Y}|^2 n/M} \overset{(a)}{\le} e^{2|\mathcal{Y}|^2/n} \, \epsilon',$
where (a) follows from our choice of $M$. Thus, for $n$ large enough and $\epsilon'$ small enough, we have
$W^n\big( (\phi^{-1}(m))^c \,\big|\, f(m) \big) \le \epsilon.$
 ☐
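The entropy-continuity bound from ([3], Lemma 2.7) is used twice in the proof above. Purely as an illustration, and not as part of the proof, the following small Python script checks the bound numerically on random pairs of distributions; the $\ell_1$ form of the bound and all names and parameters in the script are our own choices for this sketch.

import numpy as np

rng = np.random.default_rng(0)

def entropy(p):
    p = p[p > 0]
    return -np.sum(p * np.log(p))

def random_pair(alphabet_size, max_l1=0.5):
    # draw p from the simplex, then add a centered perturbation so that the
    # l1 distance between p and q stays (roughly) below max_l1
    p = rng.dirichlet(np.ones(alphabet_size))
    noise = rng.normal(size=alphabet_size)
    noise -= noise.mean()
    noise *= max_l1 / (2.0 * np.abs(noise).sum())
    q = np.clip(p + noise, 1e-12, None)
    q /= q.sum()
    return p, q

worst_ratio = 0.0
for _ in range(10000):
    K = int(rng.integers(2, 8))
    p, q = random_pair(K)
    theta = np.abs(p - q).sum()          # l1 distance between p and q
    if theta < 1e-6 or theta > 0.5:      # the lemma assumes theta <= 1/2
        continue
    bound = theta * np.log(K / theta)    # the bound in the form used above
    worst_ratio = max(worst_ratio, abs(entropy(p) - entropy(q)) / bound)

print("largest observed |H(p)-H(q)| / bound:", worst_ratio)  # expected to stay below 1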

Appendix B. Equivalence of Rate Regions

We have
$\bigcup_{U_{\hat{s}_1}, \ldots, U_{\hat{s}_{|\hat{S}|}}} \bigcap_{\hat{s} \in \hat{S}} \mathcal{R}^{(PSCA)}_{\hat{s}}(S, U_{\hat{s}}) \overset{(a)}{=} \bigcup_{U_{\hat{s}_1}} \bigcup_{U_{\hat{s}_2}, \ldots, U_{\hat{s}_{|\hat{S}|}}} \Big( \bigcap_{\hat{s} \in \hat{S} \setminus \{\hat{s}_1\}} \mathcal{R}_{\hat{s}}(S, U_{\hat{s}}) \cap \mathcal{R}_{\hat{s}_1}(S, U_{\hat{s}_1}) \Big),$
where we drop the superscript $(PSCA)$ for shorter notation in (a). We now use the distributive law for sets and get
$\bigcup_{U_{\hat{s}_1}} \Big( \Big( \bigcup_{U_{\hat{s}_2}, \ldots, U_{\hat{s}_{|\hat{S}|}}} \bigcap_{\hat{s} \in \hat{S} \setminus \{\hat{s}_1\}} \mathcal{R}_{\hat{s}}(S, U_{\hat{s}}) \Big) \cap \mathcal{R}_{\hat{s}_1}(S, U_{\hat{s}_1}) \Big).$
Now, we use the distributive law again and get
$\Big( \bigcup_{U_{\hat{s}_2}, \ldots, U_{\hat{s}_{|\hat{S}|}}} \bigcap_{\hat{s} \in \hat{S} \setminus \{\hat{s}_1\}} \mathcal{R}_{\hat{s}}(S, U_{\hat{s}}) \Big) \cap \bigcup_{U_{\hat{s}_1}} \mathcal{R}_{\hat{s}_1}(S, U_{\hat{s}_1}).$
Following these steps for all $\hat{s} \in \hat{S}$, we get
$\bigcap_{\hat{s} \in \hat{S}} \bigcup_{U_{\hat{s}}} \mathcal{R}^{(PSCS)}_{\hat{s}}(S, U_{\hat{s}}).$
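Both applications of the distributive law above are instances of the elementary set identity
$\bigcup_{i \in \mathcal{I}} \big( \mathcal{A} \cap \mathcal{B}_i \big) = \mathcal{A} \cap \bigcup_{i \in \mathcal{I}} \mathcal{B}_i,$
applied with a set $\mathcal{A}$ that does not depend on the union index: first with the index running over $U_{\hat{s}_2}, \ldots, U_{\hat{s}_{|\hat{S}|}}$ and $\mathcal{A} = \mathcal{R}_{\hat{s}_1}(S, U_{\hat{s}_1})$, then with the index running over $U_{\hat{s}_1}$.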

Appendix C. Modifying Markov Chains

Theorem A1.
Let A, B, C and D be jointly distributed RVs. It holds that
$(A10)\quad A - B - C \iff C - B - A,$
$(A11)\quad AB - C - D \implies B - C - D,$
$(A12)\quad AB - C - D \implies A - BC - D,$
and, if $P_{ABC}(a, b, c) = P_{AB}(a, b) P_C(c)$ for all $(a, b, c) \in \mathcal{A} \times \mathcal{B} \times \mathcal{C}$,
$(A13)\quad A - BC - D \implies A - B - CD.$
Proof. 
We give a proof for each of the statements.
  • We have
    $P_{ABC}(a, b, c) \overset{(a)}{=} P_{A|B}(a \mid b) P_{BC}(b, c) = P_{A|B}(a \mid b) P_{C|B}(c \mid b) P_B(b) = P_{AB}(a, b) P_{C|B}(c \mid b)$
    for all $(a, b, c) \in \mathcal{A} \times \mathcal{B} \times \mathcal{C}$. Here, (a) follows from $A - B - C$. Thus, we see that Equivalence (A10) is true.
  • We have $P_{ABCD}(a, b, c, d) = P_{AB|C}(a, b \mid c) P_{CD}(c, d)$ for all $(a, b, c, d) \in \mathcal{A} \times \mathcal{B} \times \mathcal{C} \times \mathcal{D}$ from $AB - C - D$. Summing both sides over all $a \in \mathcal{A}$, we get $P_{BCD}(b, c, d) = P_{B|C}(b \mid c) P_{CD}(c, d)$, which is Implication (A11).
  • We have
    $P_{ABCD}(a, b, c, d) \overset{(a)}{=} P_{AB|C}(a, b \mid c) P_{CD}(c, d) = P_{B|C}(b \mid c) P_{A|BC}(a \mid b, c) P_{CD}(c, d) \overset{(b)}{=} P_{A|BC}(a \mid b, c) P_{B|CD}(b \mid c, d) P_{CD}(c, d) = P_{A|BC}(a \mid b, c) P_{BCD}(b, c, d)$
    for all $(a, b, c, d) \in \mathcal{A} \times \mathcal{B} \times \mathcal{C} \times \mathcal{D}$, where (a) follows from $AB - C - D$ and (b) from Implication (A11). This means Implication (A12) is true.
  • We have
    $P_{ABCD}(a, b, c, d) \overset{(a)}{=} P_{A|BC}(a \mid b, c) P_{BCD}(b, c, d) = P_{A|BC}(a \mid b, c) P_{D|BC}(d \mid b, c) P_{BC}(b, c) \overset{(b)}{=} P_{AB}(a, b) P_C(c) P_{D|BC}(d \mid b, c) = P_{A|B}(a \mid b) P_B(b) P_C(c) P_{D|BC}(d \mid b, c) \overset{(c)}{=} P_{A|B}(a \mid b) P_{BC}(b, c) P_{D|BC}(d \mid b, c) = P_{A|B}(a \mid b) P_{BCD}(b, c, d)$
    for all $(a, b, c, d) \in \mathcal{A} \times \mathcal{B} \times \mathcal{C} \times \mathcal{D}$, where (a) follows from $A - BC - D$ and (b) and (c) follow as $C$ is independent of $(A, B)$. Thus, we have Implication (A13).
 ☐
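As a purely illustrative sanity check of Theorem A1 (not part of the proof), the following Python snippet builds joint distributions that satisfy the respective hypotheses by construction and verifies the implied Markov chains numerically via conditional mutual information. The alphabet sizes, the seed and all names are arbitrary choices for this sketch.

import numpy as np

rng = np.random.default_rng(1)
nA, nB, nC, nD = 2, 3, 2, 3

def cond_mi(P, X, Y, Z):
    # I(X; Y | Z) for a joint pmf P over axes (A, B, C, D); X, Y, Z are
    # disjoint tuples of axis indices
    def marg(axes):
        keep = sorted(axes)
        drop = tuple(i for i in range(P.ndim) if i not in keep)
        return P.sum(axis=drop)
    def H(Q):
        Q = Q[Q > 0]
        return -np.sum(Q * np.log(Q))
    return H(marg(X + Z)) + H(marg(Y + Z)) - H(marg(X + Y + Z)) - H(marg(Z))

# Hypothesis of (A11) and (A12): AB - C - D, built as P_{AB|C} * P_C * P_{D|C}
P_C = rng.dirichlet(np.ones(nC))
P_AB_given_C = rng.dirichlet(np.ones(nA * nB), size=nC).reshape(nC, nA, nB)
P_D_given_C = rng.dirichlet(np.ones(nD), size=nC)
P1 = np.einsum('cab,c,cd->abcd', P_AB_given_C, P_C, P_D_given_C)

print("I(B;D|C)  =", cond_mi(P1, (1,), (3,), (2,)))      # ~0, i.e. B - C - D
print("I(A;D|BC) =", cond_mi(P1, (0,), (3,), (1, 2)))    # ~0, i.e. A - BC - D

# Hypotheses of (A13): A - BC - D and C independent of (A, B),
# built as P_{AB} * P_C * P_{D|BC}
P_AB = rng.dirichlet(np.ones(nA * nB)).reshape(nA, nB)
P_D_given_BC = rng.dirichlet(np.ones(nD), size=(nB, nC))
P2 = np.einsum('ab,c,bcd->abcd', P_AB, P_C, P_D_given_BC)

print("I(A;CD|B) =", cond_mi(P2, (0,), (2, 3), (1,)))    # ~0, i.e. A - B - CD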

References

1. Schaefer, R.F.; Boche, H.; Khisti, A.; Poor, H.V. Information Theoretic Security and Privacy of Information Systems; Cambridge University Press: Cambridge, UK, 2017.
2. Shannon, C.E. Communication theory of secrecy systems. Bell Syst. Tech. J. 1949, 28, 656–715.
3. Csiszár, I.; Körner, J. Information Theory: Coding Theorems for Discrete Memoryless Systems; Cambridge University Press: Cambridge, UK, 2011.
4. Wyner, A.D. The wire-tap channel. Bell Syst. Tech. J. 1975, 54, 1355–1387.
5. Bloch, M.; Barros, J. Physical-Layer Security: From Information Theory to Security Engineering; Cambridge University Press: Cambridge, UK, 2011.
6. Ahlswede, R.; Csiszár, I. Common randomness in information theory and cryptography. Part I: Secret sharing. IEEE Trans. Inf. Theory 1993, 39, 1121–1132.
7. Ignatenko, T.; Willems, F.M. Biometric security from an information theoretical perspective. Found. Trends Commun. Inf. Theory 2012, 7, 135–316.
8. Grigorescu, A.; Boche, H.; Schaefer, R.F. Robust PUF based authentication. In Proceedings of the IEEE International Workshop on Information Forensics and Security (WIFS), Rome, Italy, 16–19 November 2015; pp. 1–6.
9. Lai, L.; Ho, S.-W.; Poor, H.V. Privacy-security tradeoffs in biometric security systems. In Proceedings of the 46th Annual Allerton Conference on Communication, Control, and Computing, Urbana-Champaign, IL, USA, 23–26 September 2008; pp. 268–273.
10. Boche, H.; Wyrembelski, R.F. Secret key generation using compound sources-optimal key-rates and communication costs. In Proceedings of the 2013 9th International ITG Conference on Systems, Communication and Coding (SCC), München, Germany, 21–24 January 2013.
11. Grigorescu, A.; Boche, H.; Schaefer, R.F. Robust Biometric Authentication from an Information Theoretic Perspective. Entropy 2017, 19, 480.
12. Baur, S.; Boche, H. Robust authentication and data storage with perfect secrecy. In Proceedings of the 2017 IEEE Conference on Computer Communications Workshops (INFOCOM WKSHPS), Atlanta, GA, USA, 1–4 May 2017; pp. 553–558.
13. Baur, S.; Boche, H. Robust Secure Storage of Data Sources with Perfect Secrecy. In Proceedings of the IEEE Workshop on Information Forensics and Security, Rennes, France, 4–7 December 2017.
14. Baur, S.; Boche, H. Storage of general data sources on a public database with security and privacy constraints. In Proceedings of the 2017 IEEE Conference on Communications and Network Security (CNS), Las Vegas, NV, USA, 9–11 October 2017; pp. 555–559.
15. Willems, F.; Ignatenko, T. Authentication based on secret-key generation. In Proceedings of the 2012 IEEE International Symposium on Information Theory Proceedings (ISIT), Cambridge, MA, USA, 1–6 July 2012; pp. 1792–1796.
16. Gallager, R. Information Theory and Reliable Communication; Springer: Berlin, Germany, 1968.
17. Wolfowitz, J. Coding Theorems of Information Theory; Springer: Berlin, Germany, 1978.
18. Blackwell, D.; Breiman, L.; Thomasian, A.J. The capacity of a class of channels. Ann. Math. Stat. 1959, 30, 1229–1241.
19. Tavangaran, N.; Baur, S.; Grigorescu, A.; Boche, H. Compound biometric authentication systems with strong secrecy. In Proceedings of the 2017 11th International ITG Conference on Systems, Communication and Coding (SCC), Hamburg, Germany, 6–9 February 2017.
20. Han, T.S. Information-Spectrum Methods in Information Theory; Springer Science & Business Media: New York, NY, USA, 2013; Volume 50.
21. Boche, H.; Cai, N. Common Random Secret Key Generation on Arbitrarily Varying Source. In Proceedings of the 23rd International Symposium on Mathematical Theory of Networks and Systems (MTNS2018), Hong Kong, China, 16–20 July 2018; in press.
22. Schaefer, R.F.; Boche, H.; Poor, H.V. Secure Communication Under Channel Uncertainty and Adversarial Attacks. Proc. IEEE 2015, 103, 1796–1813.
23. Wiese, M.; Nötzel, J.; Boche, H. A Channel Under Simultaneous Jamming and Eavesdropping Attack—Correlated Random Coding Capacities Under Strong Secrecy Criteria. IEEE Trans. Inf. Theory 2016, 62, 3844–3862.
24. Nötzel, J.; Wiese, M.; Boche, H. The Arbitrarily Varying Wiretap Channel—Secret Randomness, Stability, and Super-Activation. IEEE Trans. Inf. Theory 2016, 62, 3504–3531.
Figure 1. Authentication process considered in [7].
Figure 2. Authentication process with source uncertainty (as considered in [12]).
Figure 3. Secure storage process with source uncertainty (as considered in [13]).
Figure 4. Secure storage of a source (as considered in [14]).
